Sample records for cross-language speech perception

  1. Effects of language experience on pre-categorical perception: Distinguishing general from specialized processes in speech perception.

    PubMed

    Iverson, Paul; Wagner, Anita; Rosen, Stuart

    2016-04-01

    Cross-language differences in speech perception have traditionally been linked to phonological categories, but it has become increasingly clear that language experience has effects beginning at early stages of perception, which blurs the accepted distinctions between general and speech-specific processing. The present experiments explored this distinction by playing English and Japanese speakers stimuli that manipulated the acoustic form of English /r/ and /l/, in order to determine how acoustically natural and phonologically identifiable a stimulus must be for cross-language discrimination differences to emerge. Discrimination differences were found for stimuli that did not sound subjectively like speech or /r/ and /l/, but overall they were strongly linked to phonological categorization. The results thus support the view that phonological categories are an important source of cross-language differences, but also show that these differences can extend to stimuli that do not clearly sound like speech.

  2. Teaching Turkish as a Foreign Language: Extrapolating from Experimental Psychology

    ERIC Educational Resources Information Center

    Erdener, Dogu

    2017-01-01

    Speech perception extends beyond the auditory domain: it is a multimodal process, specifically an auditory-visual one--we process lip and face movements during speech. In this paper, the findings of cross-language studies of auditory-visual speech perception over the past two decades are applied to the domain of second language (L2)…

  3. Language specificity in the perception of voiceless sibilant fricatives in Japanese and English: Implications for cross-language differences in speech-sound development

    PubMed Central

    Li, Fangfang; Munson, Benjamin; Edwards, Jan; Yoneyama, Kiyoko; Hall, Kathleen

    2011-01-01

    Both English and Japanese have two voiceless sibilant fricatives, an anterior fricative /s/ contrasting with a more posterior fricative /ʃ/. When children acquire sibilant fricatives, English children typically substitute [s] for /ʃ/, whereas Japanese children typically substitute [ʃ] for /s/. This study examined English- and Japanese-speaking adults' perception of children's productions of voiceless sibilant fricatives to investigate whether the apparent asymmetry in the acquisition of voiceless sibilant fricatives reported previously in the two languages was due in part to how adults perceive children's speech. The results of this study show that adult speakers of English and Japanese weighed acoustic parameters differently when identifying fricatives produced by children and that these differences explain, in part, the apparent cross-language asymmetry in fricative acquisition. This study shows that generalizations about universal and language-specific patterns in speech-sound development cannot be determined without considering all sources of variation, including speech perception. PMID:21361456

  4. Cross-language Activation and the Phonetics of Code-switching

    NASA Astrophysics Data System (ADS)

    Piccinini, Page Elizabeth

    It is now well established that bilinguals have both languages activated to some degree at all times. This cross-language activation has been documented in several research paradigms, including picture naming, reading, and electrophysiological studies. What is less well understood is how the degree to which a language is activated can vary in different language environments or contexts. Furthermore, past research investigating effects of order of acquisition and language dominance has been mixed, as the two variables are often conflated. In this dissertation, I test how the degree of cross-language activation can vary according to context by examining phonetic productions in code-switching speech. Both spontaneous speech and scripted speech are analyzed. Follow-up perception experiments are conducted to see if listeners are able to anticipate language switches, potentially due to the phonetic cues in the signal. Additionally, by focusing on early bilinguals who are L1 Spanish but English dominant, I am able to see what plays a greater role in cross-language activation, order of acquisition or language dominance. I find that speakers do have intermediate phonetic productions in code-switching contexts relative to monolingual contexts. Effects are larger and more consistent in English than in Spanish. Similar effects are found in speech perception. Listeners are able to anticipate language switches from English to Spanish but not from Spanish to English. Together these results suggest that language dominance is a more important factor than order of acquisition in cross-language activation for early bilinguals. Future models of bilingual language organization and access should take into account both context and language dominance when modeling degrees of cross-language activation.

  5. Infant-Directed Speech Supports Phonetic Category Learning in English and Japanese

    ERIC Educational Resources Information Center

    Werker, Janet F.; Pons, Ferran; Dietrich, Christiane; Kajikawa, Sachiyo; Fais, Laurel; Amano, Shigeaki

    2007-01-01

    Across the first year of life, infants show decreased sensitivity to phonetic differences not used in the native language [Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: evidence for perceptual reorganization during the first year of life. "Infant Behaviour and Development," 7, 49-63]. In an artificial language learning…

  6. Personality, Category, and Cross-Linguistic Speech Sound Processing: A Connectivistic View

    PubMed Central

    Li, Will X. Y.

    2014-01-01

    Category formation is a vital part of human perceptual and cognitive ability. Neuroscience and linguistics, however, seldom address it when the two disciplines are brought together. The present study reviews the neurological view of language acquisition as normalization of the incoming speech signal, and suggests how speech-sound category formation may connect personality with second language speech perception. Through a questionnaire, ego boundary (thick or thin), a correlate of category formation, was shown to be a positive indicator of personality type. Following this qualitative study, thick-boundary and thin-boundary English learners whose native language is Cantonese were given a speech-signal perception test using an ABX discrimination task protocol. Results showed that thick-boundary learners achieved significantly lower accuracy rates than thin-boundary learners. The results imply that differences in personality do have an impact on language learning. PMID:24757425
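
    A note on the method named above: an ABX trial presents two reference stimuli (A, B) followed by a probe (X) that matches one of them, and the listener judges which. Below is a minimal, hypothetical sketch of how such trials might be scored; the trial structure, group labels, and data are invented for illustration, not taken from the study.

    ```python
    # Score a toy set of ABX trials: a trial is correct when the listener's
    # match judgment equals the true source of X. All data are invented.
    trials = [
        {"group": "thick", "response": "A", "truth": "A"},
        {"group": "thick", "response": "B", "truth": "A"},
        {"group": "thin",  "response": "B", "truth": "B"},
        {"group": "thin",  "response": "A", "truth": "A"},
    ]

    def accuracy(group):
        scored = [t["response"] == t["truth"] for t in trials if t["group"] == group]
        return sum(scored) / len(scored)

    print(f"thick-boundary accuracy: {accuracy('thick'):.2f}")
    print(f"thin-boundary accuracy:  {accuracy('thin'):.2f}")
    ```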

  7. The effect of language experience on perceptual normalization of Mandarin tones and non-speech pitch contours.

    PubMed

    Luo, Xin; Ashmore, Krista B

    2014-06-01

    Context-dependent pitch perception helps listeners recognize tones produced by speakers with different fundamental frequencies (f0s). The role of language experience in tone normalization remains unclear. In this cross-language study of tone normalization, native Mandarin and English listeners were asked to recognize Mandarin Tone 1 (high-flat) and Tone 2 (mid-rising) with a preceding Mandarin sentence. To further test whether context-dependent pitch perception is speech-specific or domain-general, both language groups were asked to identify non-speech flat and rising pitch contours with a preceding non-speech flat pitch contour. Results showed that both Mandarin and English listeners made more rising responses with non-speech than with speech stimuli, due to differences in spectral complexity and listening task between the two stimulus types. English listeners made more rising responses than Mandarin listeners with both speech and non-speech stimuli. Contrastive context effects (more rising responses in the high-f0 context than in the low-f0 context) were found with both speech and non-speech stimuli for Mandarin listeners, but not for English listeners. English listeners' lack of tone experience may have caused more rising responses and limited use of context f0 cues. These results suggest that context-dependent pitch perception in tone normalization is domain-general, but influenced by long-term language experience.

  8. Early Postimplant Speech Perception and Language Skills Predict Long-Term Language and Neurocognitive Outcomes Following Pediatric Cochlear Implantation

    PubMed Central

    Kronenberger, William G.; Castellanos, Irina; Pisoni, David B.

    2017-01-01

    Purpose: We sought to determine whether speech perception and language skills measured early after cochlear implantation in children who are deaf, and early postimplant growth in speech perception and language skills, predict long-term speech perception, language, and neurocognitive outcomes. Method: Thirty-six long-term users of cochlear implants, implanted at an average age of 3.4 years, completed measures of speech perception, language, and executive functioning an average of 14.4 years postimplantation. Speech perception and language skills measured in the 1st and 2nd years postimplantation and open-set word recognition measured in the 3rd and 4th years postimplantation were obtained from a research database in order to assess predictive relations with long-term outcomes. Results: Speech perception and language skills at 6 and 18 months postimplantation were correlated with long-term outcomes for language, verbal working memory, and parent-reported executive functioning. Open-set word recognition was correlated with early speech perception and language skills and long-term speech perception and language outcomes. Hierarchical regressions showed that early speech perception and language skills at 6 months postimplantation and growth in these skills from 6 to 18 months both accounted for substantial variance in long-term outcomes for language and verbal working memory that was not explained by conventional demographic and hearing factors. Conclusion: Speech perception and language skills measured very early postimplantation, and early postimplant growth in speech perception and language, may be clinically relevant markers of long-term language and neurocognitive outcomes in users of cochlear implants. Supplemental materials https://doi.org/10.23641/asha.5216200 PMID:28724130
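
    The hierarchical-regression logic this abstract describes, entering conventional covariates first and then testing how much variance the early skill measures add, can be sketched as follows. The data and variable names are simulated assumptions, not the study's.

    ```python
    # Minimal hierarchical (incremental R^2) regression sketch with fake data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 36
    demo = rng.normal(size=(n, 2))    # step 1: demographic/hearing covariates
    early = rng.normal(size=(n, 2))   # step 2: early speech/language scores
    outcome = demo @ [0.2, 0.1] + early @ [0.6, 0.4] + rng.normal(size=n)

    step1 = sm.OLS(outcome, sm.add_constant(demo)).fit()
    step2 = sm.OLS(outcome, sm.add_constant(np.hstack([demo, early]))).fit()
    # variance in the outcome uniquely attributable to the step-2 predictors
    print(f"delta R^2 = {step2.rsquared - step1.rsquared:.3f}")
    ```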

  9. Cognitive Control of Speech Perception across the Lifespan: A Large-Scale Cross-Sectional Dichotic Listening Study

    ERIC Educational Resources Information Center

    Westerhausen, René; Bless, Josef J.; Passow, Susanne; Kompus, Kristiina; Hugdahl, Kenneth

    2015-01-01

    The ability to use cognitive-control functions to regulate speech perception is thought to be crucial in mastering developmental challenges, such as language acquisition during childhood or compensation for sensory decline in older age, enabling interpersonal communication and meaningful social interactions throughout the entire life span.…

  10. Early Postimplant Speech Perception and Language Skills Predict Long-Term Language and Neurocognitive Outcomes Following Pediatric Cochlear Implantation

    ERIC Educational Resources Information Center

    Hunter, Cynthia R.; Kronenberger, William G.; Castellanos, Irina; Pisoni, David B.

    2017-01-01

    Purpose: We sought to determine whether speech perception and language skills measured early after cochlear implantation in children who are deaf, and early postimplant growth in speech perception and language skills, predict long-term speech perception, language, and neurocognitive outcomes. Method: Thirty-six long-term users of cochlear…

  11. Visual Speech Perception in Children with Language Learning Impairments

    ERIC Educational Resources Information Center

    Knowland, Victoria C. P.; Evans, Sam; Snell, Caroline; Rosen, Stuart

    2016-01-01

    Purpose: The purpose of the study was to assess the ability of children with developmental language learning impairments (LLIs) to use visual speech cues from the talking face. Method: In this cross-sectional study, 41 typically developing children (mean age: 8 years 0 months, range: 4 years 5 months to 11 years 10 months) and 27 children with…

  12. Perceptions of Refusals to Invitations: Exploring the Minds of Foreign Language Learners

    ERIC Educational Resources Information Center

    Felix-Brasdefer, J. Cesar

    2008-01-01

    Descriptions of speech act realisations of native and non-native speakers abound in the cross-cultural and interlanguage pragmatics literature. Yet, what is lacking is an analysis of the cognitive processes involved in the production of speech acts. This study examines the cognitive processes and perceptions of learners of Spanish when refusing…

  13. Language/culture/mind/brain. Progress at the margins between disciplines.

    PubMed

    Kuhl, P K; Tsao, F M; Liu, H M; Zhang, Y; De Boer, B

    2001-05-01

    At the forefront of research on language are new data demonstrating infants' strategies in the early acquisition of language. The data show that infants perceptually "map" critical aspects of ambient language in the first year of life before they can speak. Statistical and abstract properties of speech are picked up through exposure to ambient language. Moreover, linguistic experience alters infants' perception of speech, warping perception in a way that enhances native-language speech processing. Infants' strategies are unexpected and unpredicted by historical views. At the same time, research in three additional disciplines is contributing to our understanding of language and its acquisition by children. Cultural anthropologists are demonstrating the universality of adult speech behavior when addressing infants and children across cultures, and this is creating a new view of the role adult speakers play in bringing about language in the child. Neuroscientists, using the techniques of modern brain imaging, are revealing the temporal and structural aspects of language processing by the brain and suggesting new views of the critical period for language. Computer scientists, modeling the computational aspects of children's language acquisition, are meeting success using biologically inspired neural networks. Although a consilient view cannot yet be offered, the cross-disciplinary interaction now seen among scientists pursuing one of humans' greatest achievements, language, is quite promising.

  14. Auditory processing and speech perception in children with specific language impairment: relations with oral language and literacy skills.

    PubMed

    Vandewalle, Ellen; Boets, Bart; Ghesquière, Pol; Zink, Inge

    2012-01-01

    This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of children, aged 6 years 3 months to 6 years 8 months, attending grade 1: (1) children with specific language impairment (SLI) and literacy delay (n = 8), (2) children with SLI and normal literacy (n = 10) and (3) typically developing children (n = 14). Moreover, the relations between these auditory processing and speech perception skills and oral language and literacy skills in grade 1 and grade 3 were analyzed. The SLI group with literacy delay scored significantly lower than both other groups on speech perception, but not on temporal auditory processing. The two normal reading groups did not differ in terms of speech perception or auditory processing. Speech perception was significantly related to reading and spelling in grades 1 and 3 and made a unique predictive contribution to reading growth in grade 3, even after controlling for reading level, phonological ability, auditory processing and oral language skills in grade 1. These findings indicate that speech perception has a unique, direct impact on reading development, and not only an indirect one through its relation with phonological awareness. Moreover, speech perception seemed to be more associated with the development of literacy skills and less with oral language ability.

  15. Community Health Workers perceptions in relation to speech and language disorders.

    PubMed

    Knochenhauer, Carla Cristina Lins Santos; Vianna, Karina Mary de Paiva

    2016-01-01

    To assess the perceptions of Community Health Workers (CHW) regarding speech and language disorders. This cross-sectional study used a questionnaire on CHW's knowledge of speech and language disorders. The research was carried out with CHW assigned to the Centro Sanitary District of Florianópolis. We interviewed 35 CHW, mostly (80%) female, with an average age of 47 years (standard deviation = 2.09 years). Of the professionals interviewed, 57% said that they knew the work of the speech therapist, 57% believed that there is no relationship between chronic diseases and speech therapy, and 97% thought that the participation of Speech, Hearing and Language Sciences is important in primary care. As for training, 88% of CHW reported never having had any training delivered by a speech therapist; 75% stated they had completed the Estratégia Amamenta e Alimenta Brasil training, 57% the Programa Capital Criança, and 41% the Programa Capital Idoso. CHW's knowledge of the speech therapist's work is still limited, but the importance of speech and language disorders in primary care is recognized. This lack of knowledge may be related to the absence of CHW training actions and/or continuing education courses that could prepare these professionals to identify disorders and better educate the population during their home visits. This study highlights the need for further research on training actions for these professionals.

  16. Lexical Effects on Second Language Acquisition

    ERIC Educational Resources Information Center

    Kemp, Renee Lorraine

    2017-01-01

    Speech production and perception are inextricably linked systems. Speakers modify their speech in response to listener characteristics, such as age, hearing ability, and language background. Listener-oriented modifications in speech production, commonly referred to as clear speech, have also been found to affect speech perception by enhancing…

  17. Auditory Processing and Speech Perception in Children with Specific Language Impairment: Relations with Oral Language and Literacy Skills

    ERIC Educational Resources Information Center

    Vandewalle, Ellen; Boets, Bart; Ghesquiere, Pol; Zink, Inge

    2012-01-01

    This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of children, aged 6 years 3 months to 6 years 8 months, attending grade 1: (1) children with specific language impairment (SLI) and literacy delay…

  18. Perception of Audio-Visual Speech Synchrony in Spanish-Speaking Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.

    2013-01-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the…

  19. Is the perception of dysphonia severity language-dependent? A comparison of French and Italian voice assessments.

    PubMed

    Ghio, Alain; Cantarella, Giovanna; Weisz, Frédérique; Robert, Danièle; Woisard, Virginie; Fussi, Franco; Giovanni, Antoine; Baracca, Giovanna

    2015-04-01

    In this cross-language study, six Italian and six French voice experts perceptually evaluated the speech of 27 Italian and 40 French patients with dysphonia to determine if there were differences based on native language. French and Italian voice specialists agreed substantially in their evaluations of the overall grade of dysphonia and moderately concerning roughness and breathiness. No statistically significant effects were found related to the language of the speakers with the exception of breathiness, a finding that was interpreted as being due to different voice pathologies in the patient groups. It was concluded that the perception of the overall grade of dysphonia and breathiness is not language-dependent, whereas the significant difference in the perception of roughness may be related to a perception/adaptation process.

  20. The Role of Clinical Experience in Speech-Language Pathologists' Perception of Subphonemic Detail in Children's Speech

    PubMed Central

    Munson, Benjamin; Johnson, Julie M.; Edwards, Jan

    2013-01-01

    Purpose: This study examined whether experienced speech-language pathologists differ from inexperienced listeners in their perception of phonetic detail in children's speech. Method: Convenience samples comprising 21 experienced speech-language pathologists and 21 inexperienced listeners participated in a series of tasks in which they made visual-analog scale (VAS) ratings of children's natural productions of target /s/-/θ/, /t/-/k/, and /d/-/ɡ/ in word-initial position. Listeners rated the perceptual distance between individual productions and ideal productions. Results: The experienced listeners' ratings differed from inexperienced listeners' in four ways: they had higher intra-rater reliability, they showed less bias toward a more frequent sound, their ratings were more closely related to the acoustic characteristics of the children's speech, and their responses were related to a different set of predictor variables. Conclusions: Results suggest that experience working as a speech-language pathologist leads to better perception of phonetic detail in children's speech. Limitations and future research are discussed. PMID:22230182
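
    One common way to quantify the intra-rater reliability reported above is to correlate a listener's ratings of repeated presentations of the same tokens; the study's exact metric may differ. A minimal sketch with invented VAS values:

    ```python
    # Pearson correlation between two rating passes over the same tokens.
    import numpy as np

    first_pass  = np.array([0.81, 0.12, 0.55, 0.90, 0.33])  # VAS in [0, 1]
    second_pass = np.array([0.78, 0.20, 0.49, 0.88, 0.41])

    r = np.corrcoef(first_pass, second_pass)[0, 1]
    print(f"intra-rater reliability (Pearson r): {r:.2f}")
    ```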

  1. Factors contributing to speech perception scores in long-term pediatric cochlear implant users.

    PubMed

    Davidson, Lisa S; Geers, Ann E; Blamey, Peter J; Tobey, Emily A; Brenner, Christine A

    2011-02-01

    The objectives of this report are to (1) describe the speech perception abilities of long-term pediatric cochlear implant (CI) recipients by comparing scores obtained at elementary school (CI-E, 8 to 9 yrs) with scores obtained at high school (CI-HS, 15 to 18 yrs); (2) evaluate speech perception abilities in demanding listening conditions (i.e., noise and lower intensity levels) at adolescence; and (3) examine the relation of speech perception scores to speech and language development over this longitudinal timeframe. All 112 teenagers were part of a previous nationwide study of 8- and 9-yr-olds (N = 181) who received a CI between 2 and 5 yrs of age. The test battery included (1) the Lexical Neighborhood Test (LNT; hard and easy word lists); (2) the Bamford Kowal Bench sentence test; (3) the Children's Auditory-Visual Enhancement Test; (4) the Test of Auditory Comprehension of Language at CI-E; (5) the Peabody Picture Vocabulary Test at CI-HS; and (6) the McGarr sentences (consonants correct) at CI-E and CI-HS. CI-HS speech perception was measured in both optimal and demanding listening conditions (i.e., background noise and low-intensity level). Speech perception scores were compared based on age at test, lexical difficulty of stimuli, listening environment (optimal and demanding), input mode (visual and auditory-visual), and language age. All group mean scores significantly increased with age across the two test sessions. Scores of adolescents significantly decreased in demanding listening conditions. The effect of lexical difficulty on the LNT scores, as evidenced by the difference in performance between easy versus hard lists, increased with age and decreased for adolescents in challenging listening conditions. Calculated curves for percent correct speech perception scores (LNT and Bamford Kowal Bench) and consonants correct on the McGarr sentences plotted against age-equivalent language scores on the Test of Auditory Comprehension of Language and Peabody Picture Vocabulary Test achieved asymptote at similar ages, around 10 to 11 yrs. On average, children receiving CIs between 2 and 5 yrs of age exhibited significant improvement on tests of speech perception, lipreading, speech production, and language skills measured between primary grades and adolescence. Evidence suggests that improvement in speech perception scores with age reflects increased spoken language level up to a language age of about 10 yrs. Speech perception performance significantly decreased with softer stimulus intensity level and with introduction of background noise. Upgrades to newer speech processing strategies and greater use of frequency-modulated systems may be beneficial for ameliorating performance under these demanding listening conditions.

  2. Categorical perception of intonation contrasts: effects of listeners' language background.

    PubMed

    Liu, Chang; Rodriguez, Amanda

    2012-06-01

    Intonation perception of English speech was examined for English- and Chinese-native listeners. The F0 contour was manipulated from falling to rising patterns for the final words of three sentences. The listeners' task was to identify and discriminate the intonation of each sentence (question versus statement). English and Chinese listeners showed significant differences in their identification functions, in both the categorical boundary and the slope. In the discrimination functions, Chinese listeners showed greater peakedness than their English peers. The cross-linguistic differences in intonation perception were similar to previous findings on the perception of lexical tones, likely due to differences in the listeners' language backgrounds.
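
    The categorical boundary and slope compared above are typically estimated by fitting a logistic function to identification proportions across the continuum steps. A sketch under that assumption, with illustrative data:

    ```python
    # Fit a logistic identification function; boundary = 50% crossover point,
    # slope = steepness of the category transition. Data are invented.
    import numpy as np
    from scipy.optimize import curve_fit

    steps = np.arange(1, 8)  # falling -> rising f0 continuum
    p_question = np.array([0.02, 0.05, 0.15, 0.50, 0.85, 0.95, 0.98])

    def logistic(x, boundary, slope):
        return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

    (boundary, slope), _ = curve_fit(logistic, steps, p_question, p0=[4.0, 1.0])
    print(f"boundary at step {boundary:.2f}, slope {slope:.2f}")
    ```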

  3. Perception of intelligibility and qualities of non-native accented speakers.

    PubMed

    Fuse, Akiko; Navichkova, Yuliya; Alloggio, Krysteena

    To provide effective treatment to clients, speech-language pathologists must be understood, and be perceived to demonstrate the personal qualities necessary for therapeutic practice (e.g., resourcefulness and empathy). One factor that could interfere with the listener's perception of non-native speech is the speaker's accent. The current study explored the relationship between how accurately listeners could understand non-native speech and their perceptions of personal attributes of the speaker. Additionally, this study investigated how listeners' familiarity and experience with other languages may influence their perceptions of non-native accented speech. Through an online survey, native monolingual and bilingual English listeners rated four non-native accents (i.e., Spanish, Chinese, Russian, and Indian) on perceived intelligibility and perceived personal qualities (i.e., professionalism, intelligence, resourcefulness, empathy, and patience) necessary for speech-language pathologists. The results indicated significant relationships between the perception of intelligibility and the perception of personal qualities (i.e., professionalism, intelligence, and resourcefulness) attributed to non-native speakers. However, these findings were not supported for the Chinese accent. Bilingual listeners judged the non-native speech as more intelligible in comparison to monolingual listeners. No significant differences were found in the ratings between bilingual listeners who share the same language background as the speaker and other bilingual listeners. Based on the current findings, greater perception of intelligibility was the key to promoting a positive perception of personal qualities such as professionalism, intelligence, and resourcefulness, important for speech-language pathologists. The current study found evidence to support the claim that bilinguals have a greater ability in understanding non-native accented speech compared to monolingual listeners. The results, however, did not confirm an advantage for bilingual listeners sharing the same language backgrounds with the non-native speaker over other bilingual listeners.

  4. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment

    PubMed Central

    Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.

    2014-01-01

    Speech perception involves the integration of auditory and visual articulatory information and, thus, requires the perception of temporal synchrony between this information. There is evidence that children with Specific Language Impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception. PMID:22874648

  5. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment.

    PubMed

    Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J

    2013-06-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.

  6. Patient Fatigue during Aphasia Treatment: A Survey of Speech-Language Pathologists

    ERIC Educational Resources Information Center

    Riley, Ellyn A.

    2017-01-01

    The purpose of this study was to measure speech-language pathologists' (SLPs) perceptions of fatigue in clients with aphasia and identify strategies used to manage client fatigue during speech and language therapy. SLPs completed a short online survey containing a series of questions related to their perceptions of patient fatigue. Of 312…

  7. Noise on, Voicing off: Speech Perception Deficits in Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Ziegler, Johannes C.; Pech-Georgel, Catherine; George, Florence; Lorenzi, Christian

    2011-01-01

    Speech perception of four phonetic categories (voicing, place, manner, and nasality) was investigated in children with specific language impairment (SLI) (n=20) and age-matched controls (n=19) in quiet and various noise conditions using an AXB two-alternative forced-choice paradigm. Children with SLI exhibited robust speech perception deficits in…

  8. Acoustic variability within and across German, French, and American English vowels: phonetic context effects.

    PubMed

    Strange, Winifred; Weber, Andrea; Levy, Erika S; Shafiro, Valeriy; Hisagi, Miwako; Nishi, Kanae

    2007-08-01

    Cross-language perception studies report influences of speech style and consonantal context on perceived similarity and discrimination of non-native vowels by inexperienced and experienced listeners. Detailed acoustic comparisons of distributions of vowels produced by native speakers of North German (NG), Parisian French (PF) and New York English (AE) in citation (di)syllables and in sentences (surrounded by labial and alveolar stops) are reported here. Results of within- and cross-language discriminant analyses reveal striking dissimilarities across languages in the spectral/temporal variation of coarticulated vowels. As expected, vocalic duration was most important in differentiating NG vowels; it did not contribute to PF vowel classification. Spectrally, NG long vowels showed little coarticulatory change, but back/low short vowels were fronted/raised in alveolar context. PF vowels showed greater coarticulatory effects overall; back and front rounded vowels were fronted, low and mid-low vowels were raised in both sentence contexts. AE mid to high back vowels were extremely fronted in alveolar contexts, with little change in mid-low and low long vowels. Cross-language discriminant analyses revealed varying patterns of spectral (dis)similarity across speech styles and consonantal contexts that could, in part, account for AE listeners' perception of German and French front rounded vowels, and "similar" mid-high to mid-low vowels.
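
    The discriminant analyses this record describes classify vowels from acoustic measures; a linear discriminant classifier over simulated formant and duration values gives the flavor. Features and data below are invented, not the study's.

    ```python
    # Within-"language" LDA on fake vowel tokens described by F1, F2, duration.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    # columns: F1 (Hz), F2 (Hz), duration (ms)
    vowel_a = rng.normal([700, 1200, 180], [40, 60, 15], size=(50, 3))
    vowel_i = rng.normal([300, 2300, 140], [30, 80, 15], size=(50, 3))
    X = np.vstack([vowel_a, vowel_i])
    y = np.array(["a"] * 50 + ["i"] * 50)

    lda = LinearDiscriminantAnalysis().fit(X, y)
    print(f"classification accuracy: {lda.score(X, y):.2f}")
    ```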

  9. Working memory training to improve speech perception in noise across languages

    PubMed Central

    Ingvalson, Erin M.; Dhar, Sumitrajit; Wong, Patrick C. M.; Liu, Hanjun

    2015-01-01

    Working memory capacity has been linked to performance on many higher cognitive tasks, including the ability to perceive speech in noise. Current efforts to train working memory have demonstrated that working memory performance can be improved, suggesting that working memory training may lead to improved speech perception in noise. A further advantage of working memory training to improve speech perception in noise is that working memory training materials are often simple, such as letters or digits, making them easily translatable across languages. The current effort tested the hypothesis that working memory training would be associated with improved speech perception in noise and that materials would easily translate across languages. Native Mandarin Chinese and native English speakers completed ten days of reversed digit span training. Reading span and speech perception in noise both significantly improved following training, whereas untrained controls showed no gains. These data suggest that working memory training may be used to improve listeners' speech perception in noise and that the materials may be quickly adapted to a wide variety of listeners. PMID:26093435

  10. Working memory training to improve speech perception in noise across languages.

    PubMed

    Ingvalson, Erin M; Dhar, Sumitrajit; Wong, Patrick C M; Liu, Hanjun

    2015-06-01

    Working memory capacity has been linked to performance on many higher cognitive tasks, including the ability to perceive speech in noise. Current efforts to train working memory have demonstrated that working memory performance can be improved, suggesting that working memory training may lead to improved speech perception in noise. A further advantage of working memory training to improve speech perception in noise is that working memory training materials are often simple, such as letters or digits, making them easily translatable across languages. The current effort tested the hypothesis that working memory training would be associated with improved speech perception in noise and that materials would easily translate across languages. Native Mandarin Chinese and native English speakers completed ten days of reversed digit span training. Reading span and speech perception in noise both significantly improved following training, whereas untrained controls showed no gains. These data suggest that working memory training may be used to improve listeners' speech perception in noise and that the materials may be quickly adapted to a wide variety of listeners.
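
    The reversed digit span task used as the training vehicle in the two records above can be sketched as a simple adaptive procedure; the span rules and starting length below are illustrative assumptions.

    ```python
    # Toy adaptive reversed digit span: span grows after a correct trial,
    # shrinks after an error. The "response" here is a correct stand-in.
    import random

    def reversed_digit_span_trial(span):
        digits = [random.randint(0, 9) for _ in range(span)]
        return digits, list(reversed(digits))

    span = 3
    for trial in range(5):
        digits, target = reversed_digit_span_trial(span)
        response = target                  # simulated participant response
        span = span + 1 if response == target else max(2, span - 1)
        print(f"trial {trial}: {digits} -> expected {target}, span now {span}")
    ```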

  11. Speech perception in older adults: the importance of speech-specific cognitive abilities.

    PubMed

    Sommers, M S

    1997-05-01

    To provide a critical evaluation of studies examining the contribution of changes in language-specific cognitive abilities to the speech perception difficulties of older adults. A review of the literature on aging and speech perception. The research considered in the present review suggests that age-related changes in absolute sensitivity are the principal factor affecting older listeners' speech perception in quiet. However, under less favorable listening conditions, changes in a number of speech-specific cognitive abilities can also affect spoken language processing in older people. Clinically, these findings suggest that hearing aids, which have been the traditional treatment for improving speech perception in older adults, are likely to offer considerable benefit in quiet listening situations because the amplification they provide can serve to compensate for age-related hearing losses. However, such devices may be less beneficial in more natural environments (e.g., noisy backgrounds, multiple talkers, reverberant rooms) because they are less effective for improving speech perception difficulties that result from age-related cognitive declines. It is suggested that an integrative approach to designing test batteries that can assess both sensory and cognitive abilities needed for processing spoken language offers the most promising approach for developing therapeutic interventions to improve speech perception in older adults.

  12. Review of Visual Speech Perception by Hearing and Hearing-Impaired People: Clinical Implications

    ERIC Educational Resources Information Center

    Woodhouse, Lynn; Hickson, Louise; Dodd, Barbara

    2009-01-01

    Background: Speech perception is often considered specific to the auditory modality, despite convincing evidence that speech processing is bimodal. The theoretical and clinical roles of speech-reading for speech perception, however, have received little attention in speech-language therapy. Aims: The role of speech-read information for speech…

  13. Infants’ brain responses to speech suggest Analysis by Synthesis

    PubMed Central

    Kuhl, Patricia K.; Ramírez, Rey R.; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki

    2014-01-01

    Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners’ knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults are also tested in Exp. 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca’s area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of “motherese” on early language learning, and (iii) the “social-gating” hypothesis and humans’ development of social understanding. PMID:25024207

  14. Infants' brain responses to speech suggest analysis by synthesis.

    PubMed

    Kuhl, Patricia K; Ramírez, Rey R; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki

    2014-08-05

    Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners' knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults are also tested in Exp. 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca's area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of "motherese" on early language learning, and (iii) the "social-gating" hypothesis and humans' development of social understanding.

  15. Longitudinal Speech Perception and Language Performance in Pediatric Cochlear Implant Users: the Effect of Age at Implantation

    PubMed Central

    Dunn, Camille C; Walker, Elizabeth A; Oleson, Jacob; Kenworthy, Maura; Van Voorst, Tanya; Tomblin, J. Bruce; Ji, Haihong; Kirk, Karen I; McMurray, Bob; Hanson, Marlan; Gantz, Bruce J

    2013-01-01

    Objectives: Few studies have examined the long-term effect of age at implantation on outcomes using multiple data points in children with cochlear implants. The goal of this study was to determine if age at implantation has a significant, lasting impact on speech perception, language, and reading performance for children with prelingual hearing loss. Design: A linear mixed model framework was utilized to determine the effect of age at implantation on speech perception, language, and reading abilities in 83 children with prelingual hearing loss who received cochlear implants by age 4. The children were divided into two groups based on their age at implantation: 1) under 2 years of age and 2) between 2 and 3.9 years of age. Differences in model specified mean scores between groups were compared at annual intervals from 5 to 13 years of age for speech perception, and 7 to 11 years of age for language and reading. Results: After controlling for communication mode, device configuration, and pre-operative pure-tone average, there was no significant effect of age at implantation for receptive language by 8 years of age, expressive language by 10 years of age, and reading by 7 years of age. In terms of speech perception outcomes, significance varied between 7 and 13 years of age, with no significant difference in speech perception scores between groups at ages 7, 11 and 13 years. Children who utilized oral communication (OC) demonstrated significantly higher speech perception scores than children who used total communication (TC). OC users tended to have higher expressive language scores than TC users, although this did not reach significance. There was no significant difference between OC and TC users for receptive language or reading scores. Conclusions: Speech perception, language, and reading performance continue to improve over time for children implanted before 4 years of age. The current results indicate that the effect of age at implantation diminishes with time, particularly for higher-order skills such as language and reading. Some children who receive CIs after the age of 2 years have the capacity to approximate the language and reading skills of their earlier-implanted peers, suggesting that additional factors may moderate the influence of age at implantation on outcomes over time. PMID:24231628
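
    A linear mixed model of the kind named in this abstract, repeated scores per child with a random intercept for child and fixed effects for age and implantation group, might look like the following sketch; all data and effect sizes are simulated.

    ```python
    # Random-intercept mixed model on simulated longitudinal scores.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n_children, ages = 20, [5, 7, 9, 11, 13]
    df = pd.DataFrame({
        "child": np.repeat(np.arange(n_children), len(ages)),
        "age": np.tile(ages, n_children),
        "early_implant": np.repeat(rng.integers(0, 2, n_children), len(ages)),
    })
    df["score"] = (40 + 3 * df["age"] + 5 * df["early_implant"]
                   + np.repeat(rng.normal(0, 4, n_children), len(ages))
                   + rng.normal(0, 5, len(df)))

    fit = smf.mixedlm("score ~ age + early_implant", df, groups=df["child"]).fit()
    print(fit.summary())
    ```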

  16. Perception of English intonation by English, Spanish, and Chinese listeners.

    PubMed

    Grabe, Esther; Rosner, Burton S; García-Albea, José E; Zhou, Xiaolin

    2003-01-01

    Native language affects the perception of segmental phonetic structure, of stress, and of semantic and pragmatic effects of intonation. Similarly, native language might influence the perception of similarities and differences among intonation contours. To test this hypothesis, a cross-language experiment was conducted. An English utterance was resynthesized with seven falling and four rising intonation contours. English, Iberian Spanish, and Chinese listeners then rated each pair of nonidentical stimuli for degree of difference. Multidimensional scaling of the results supported the hypothesis. The three groups of listeners produced statistically different perceptual configurations for the falling contours. All groups, however, perceptually separated the falling from the rising contours. This result suggested that the perception of intonation begins with the activation of universal auditory mechanisms that process the direction of relatively slow frequency modulations. A second experiment therefore employed frequency-modulated sine waves that duplicated the fundamental frequency contours of the speech stimuli. New groups of English, Spanish, and Chinese subjects yielded no cross-language differences between the perceptual configurations for these nonspeech stimuli. The perception of similarities and differences among intonation contours calls upon universal auditory mechanisms whose output is molded by experience with one's native language.
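
    The multidimensional-scaling step described above turns pairwise difference ratings into a spatial configuration in which perceptually similar contours sit close together. A minimal sketch with an invented dissimilarity matrix:

    ```python
    # Metric MDS on a precomputed 4x4 dissimilarity matrix (2 falling,
    # 2 rising contours). Ratings are invented for illustration.
    import numpy as np
    from sklearn.manifold import MDS

    d = np.array([
        [0.0, 1.0, 4.0, 4.5],
        [1.0, 0.0, 3.8, 4.2],
        [4.0, 3.8, 0.0, 0.9],
        [4.5, 4.2, 0.9, 0.0],
    ])

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(d)
    print(coords)  # falling contours should cluster apart from rising ones
    ```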

  17. Audiovisual Speech Perception in Children with Developmental Language Disorder in Degraded Listening Conditions

    ERIC Educational Resources Information Center

    Meronen, Auli; Tiippana, Kaisa; Westerholm, Jari; Ahonen, Timo

    2013-01-01

    Purpose: The effect of the signal-to-noise ratio (SNR) on the perception of audiovisual speech in children with and without developmental language disorder (DLD) was investigated by varying the noise level and the sound intensity of acoustic speech. The main hypotheses were that the McGurk effect (in which incongruent visual speech alters the…

  18. Effects of culture on musical pitch perception.

    PubMed

    Wong, Patrick C M; Ciocca, Valter; Chan, Alice H D; Ha, Louisa Y Y; Tan, Li-Hai; Peretz, Isabelle

    2012-01-01

    The strong association between music and speech has been supported by recent research focusing on musicians' superior abilities in second language learning and neural encoding of foreign speech sounds. However, evidence for a double association--the influence of linguistic background on music pitch processing and disorders--remains elusive. Because languages differ in their usage of elements (e.g., pitch) that are also essential for music, a unique opportunity for examining such language-to-music associations comes from a cross-cultural (linguistic) comparison of congenital amusia, a neurogenetic disorder affecting the music (pitch and rhythm) processing of about 5% of the Western population. In the present study, two populations (Hong Kong and Canada) were compared. One spoke a tone language in which differences in voice pitch correspond to differences in word meaning (in Hong Kong Cantonese, /si/ means 'teacher' and 'to try' when spoken in a high and mid pitch pattern, respectively). Using the On-line Identification Test of Congenital Amusia, we found Cantonese speakers as a group tend to show enhanced pitch perception ability compared to speakers of Canadian French and English (non-tone languages). This enhanced ability occurs in the absence of differences in rhythmic perception and persists even after relevant factors such as musical background and age were controlled. Following a common definition of amusia (5% of the population), we found Hong Kong pitch amusics also show enhanced pitch abilities relative to their Canadian counterparts. These findings not only provide critical evidence for a double association of music and speech, but also argue for the reconceptualization of communicative disorders within a cultural framework. Along with recent studies documenting cultural differences in visual perception, our auditory evidence challenges the common assumption of universality of basic mental processes and speaks to the domain generality of culture-to-perception influences.

  19. Effects of Culture on Musical Pitch Perception

    PubMed Central

    Wong, Patrick C. M.; Ciocca, Valter; Chan, Alice H. D.; Ha, Louisa Y. Y.; Tan, Li-Hai; Peretz, Isabelle

    2012-01-01

    The strong association between music and speech has been supported by recent research focusing on musicians' superior abilities in second language learning and neural encoding of foreign speech sounds. However, evidence for a double association—the influence of linguistic background on music pitch processing and disorders—remains elusive. Because languages differ in their usage of elements (e.g., pitch) that are also essential for music, a unique opportunity for examining such language-to-music associations comes from a cross-cultural (linguistic) comparison of congenital amusia, a neurogenetic disorder affecting the music (pitch and rhythm) processing of about 5% of the Western population. In the present study, two populations (Hong Kong and Canada) were compared. One spoke a tone language in which differences in voice pitch correspond to differences in word meaning (in Hong Kong Cantonese, /si/ means ‘teacher’ and ‘to try’ when spoken in a high and mid pitch pattern, respectively). Using the On-line Identification Test of Congenital Amusia, we found Cantonese speakers as a group tend to show enhanced pitch perception ability compared to speakers of Canadian French and English (non-tone languages). This enhanced ability occurs in the absence of differences in rhythmic perception and persists even after relevant factors such as musical background and age were controlled. Following a common definition of amusia (5% of the population), we found Hong Kong pitch amusics also show enhanced pitch abilities relative to their Canadian counterparts. These findings not only provide critical evidence for a double association of music and speech, but also argue for the reconceptualization of communicative disorders within a cultural framework. Along with recent studies documenting cultural differences in visual perception, our auditory evidence challenges the common assumption of universality of basic mental processes and speaks to the domain generality of culture-to-perception influences. PMID:22509257

  20. Spatiotemporal imaging of cortical activation during verb generation and picture naming.

    PubMed

    Edwards, Erik; Nagarajan, Srikantan S; Dalal, Sarang S; Canolty, Ryan T; Kirsch, Heidi E; Barbaro, Nicholas M; Knight, Robert T

    2010-03-01

    One hundred and fifty years of neurolinguistic research has identified the key structures in the human brain that support language. However, neither the classic neuropsychological approaches introduced by Broca (1861) and Wernicke (1874), nor modern neuroimaging employing PET and fMRI, has been able to delineate the temporal flow of language processing in the human brain. We recorded the electrocorticogram (ECoG) from indwelling electrodes over left hemisphere language cortices during two common language tasks, verb generation and picture naming. We observed that the very high frequencies of the ECoG (high-gamma, 70-160 Hz) track language processing with spatial and temporal precision. Serial progression of activations is seen at a larger timescale, showing distinct stages of perception, semantic association/selection, and speech production. Within the areas supporting each of these larger processing stages, parallel (or "incremental") processing is observed. In addition to the traditional posterior vs. anterior localization for speech perception vs. production, we provide novel evidence for the role of premotor cortex in speech perception and of Wernicke's and surrounding cortex in speech production. The data are discussed with regard to current leading models of speech perception and production, and a "dual ventral stream" hybrid of leading speech perception models is given.
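
    The high-gamma (70-160 Hz) measure central to this record is conventionally extracted by band-pass filtering and taking the amplitude envelope via the Hilbert transform; the filter order and the synthetic test signal below are illustrative assumptions.

    ```python
    # Band-pass filter + Hilbert envelope on a synthetic "ECoG" trace whose
    # 100 Hz component switches on at t = 1 s.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000  # sampling rate (Hz)
    t = np.arange(0, 2, 1 / fs)
    sig = np.sin(2 * np.pi * 100 * t) * (t > 1) + 0.1 * np.random.randn(len(t))

    b, a = butter(4, [70, 160], btype="bandpass", fs=fs)
    envelope = np.abs(hilbert(filtfilt(b, a, sig)))  # instantaneous amplitude
    print(f"mean high-gamma envelope before/after 1 s: "
          f"{envelope[t < 1].mean():.3f} / {envelope[t > 1].mean():.3f}")
    ```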

  1. Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy

    PubMed Central

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651

  2. Cross-language comparisons of contextual variation in the production and perception of vowels

    NASA Astrophysics Data System (ADS)

    Strange, Winifred

    2005-04-01

    In the last two decades, a considerable amount of research has investigated second-language (L2) learners' problems with perception and production of non-native vowels. Most studies have been conducted using stimuli in which the vowels are produced and presented in simple, citation-form (lists) monosyllabic or disyllabic utterances. In my laboratory, we have investigated the spectral (static/dynamic formant patterns) and temporal (syllable duration) variation in vowel productions as a function of speech-style (list/sentence utterances), speaking rate (normal/rapid), sentence focus (narrow focus/post-focus) and phonetic context (voicing/place of surrounding consonants). Data will be presented for a set of languages that include large and small vowel inventories, stress-, syllable-, and mora-timed prosody, and that vary in the phonological/phonetic function of vowel length, diphthongization, and palatalization. Results show language-specific patterns of contextual variation that affect the cross-language acoustic similarity of vowels. Research on cross-language patterns of perceived phonetic similarity by naive listeners suggests that listeners' knowledge of native language (L1) patterns of contextual variation influences their L1/L2 similarity judgments and, subsequently, their discrimination of L2 contrasts. Implications of these findings for assessing L2 learners' perception of vowels and for developing laboratory training procedures to improve L2 vowel perception will be discussed. [Work supported by NIDCD.]

  3. Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

    ERIC Educational Resources Information Center

    Zuk, Jennifer; Iuzzini-Seigel, Jenya; Cabbage, Kathryn; Green, Jordan R.; Hogan, Tiffany P.

    2018-01-01

    Purpose: Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with…

  4. The Role of Experience in the Perception of Phonetic Detail in Children's Speech: A Comparison between Speech-Language Pathologists and Clinically Untrained Listeners

    ERIC Educational Resources Information Center

    Munson, Benjamin; Johnson, Julie M.; Edwards, Jan

    2012-01-01

    Purpose: This study examined whether experienced speech-language pathologists (SLPs) differ from inexperienced people in their perception of phonetic detail in children's speech. Method: Twenty-one experienced SLPs and 21 inexperienced listeners participated in a series of tasks in which they used a visual-analog scale (VAS) to rate children's…

  5. Processing of speech and non-speech stimuli in children with specific language impairment

    NASA Astrophysics Data System (ADS)

    Basu, Madhavi L.; Surprenant, Aimee M.

    2003-10-01

    Specific Language Impairment (SLI) is a developmental language disorder in which children demonstrate varying degrees of difficulty in acquiring spoken language. One possible underlying cause is that children with SLI have deficits in processing sounds that are of short duration or that are presented rapidly. Studies so far have compared their performance on speech and nonspeech sounds of unequal complexity. Hence, it is still unclear whether the deficit is specific to the perception of speech sounds or whether it affects auditory function more generally. The current study aims to answer this question by comparing the performance of children with SLI on speech and nonspeech sounds synthesized from sine-wave stimuli. The children will be tested using the classic categorical perception paradigm, which includes both the identification and discrimination of stimuli along a continuum. If there is a deficit in performance on both the speech and nonspeech tasks, it will show that these children have a deficit in processing complex sounds. Poor performance on only the speech sounds will indicate that the deficit is more closely related to language. The findings will offer insights into the exact nature of the speech perception deficits in children with SLI. [Work supported by ASHF.]
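
    The categorical perception paradigm invoked here is usually quantified by fitting identification responses along the continuum with a logistic function, whose slope at the category boundary indexes how categorically the continuum is perceived. A small sketch on invented data (the continuum steps and response proportions below are made up for illustration):

```python
# Sketch: fit a logistic identification function over a stimulus continuum.
# The 7-step continuum and response proportions below are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of 'category B' responses at continuum step x."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 8)                                       # 7-step continuum
p_b = np.array([0.02, 0.05, 0.10, 0.45, 0.90, 0.96, 0.99])    # toy proportions

(x0, k), _ = curve_fit(logistic, steps, p_b, p0=[4.0, 1.0])
print(f"category boundary ~ step {x0:.2f}, identification slope k = {k:.2f}")
```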

  6. Computational Modeling of Emotions and Affect in Social-Cultural Interaction

    DTIC Science & Technology

    2013-10-02

    acoustic and textual information sources. Second, a cross-lingual study was performed that shed light on how human perception and automatic recognition...speech is produced, a speaker's pitch and intonational pattern, and word usage. Better feature representation and advanced approaches were used to...recognition performance, and improved our understanding of language/cultural impact on human perception of emotion and automatic classification.

  7. Factors influencing speech perception in noise for 5-year-old children using hearing aids or cochlear implants.

    PubMed

    Ching, Teresa Yc; Zhang, Vicky W; Flynn, Christopher; Burns, Lauren; Button, Laura; Hou, Sanna; McGhie, Karen; Van Buynder, Patricia

    2017-07-07

    We investigated the factors influencing speech perception in babble for 5-year-old children with hearing loss who were using hearing aids (HAs) or cochlear implants (CIs). Speech reception thresholds (SRTs) for 50% correct identification were measured in two conditions - speech collocated with babble, and speech with spatially separated babble. The difference in SRTs between the two conditions gives a measure of binaural unmasking, commonly known as spatial release from masking (SRM). Multiple linear regression analyses were conducted to examine the influence of a range of demographic factors on outcomes. Participants were 252 children enrolled in the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study. Children using HAs or CIs required a better signal-to-noise ratio to achieve the same level of performance as their normal-hearing peers but demonstrated SRM of a similar magnitude. For children using HAs, speech perception was significantly influenced by cognitive and language abilities. For children using CIs, age at CI activation and language ability were significant predictors of speech perception outcomes. Speech perception in children with hearing loss can be enhanced by improving their language abilities. Early age at cochlear implantation was also associated with better outcomes.
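
    To make the derived measure concrete: SRM is simply the SRT difference between the collocated and spatially separated conditions, and the demographic analysis is an ordinary multiple regression. A toy sketch follows; all values and predictor names are fabricated stand-ins, not LOCHI data.

```python
# Sketch: compute spatial release from masking (SRM) and run a toy multiple
# linear regression of SRT on demographic predictors. Data are fabricated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 252                                        # sample size reported above
srt_collocated = rng.normal(0.0, 2.0, n)       # dB SNR, toy values
srt_separated = srt_collocated - rng.normal(3.0, 1.0, n)
srm = srt_collocated - srt_separated           # positive = binaural unmasking

# Invented predictors standing in for the demographic factors.
age_at_activation = rng.uniform(6, 36, n)      # months, hypothetical
language_score = rng.normal(100, 15, n)        # hypothetical standard score

X = sm.add_constant(np.column_stack([age_at_activation, language_score]))
print(sm.OLS(srt_separated, X).fit().summary())
```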

  8. Is Language a Factor in the Perception of Foreign Accent Syndrome?

    PubMed

    Jose, Linda; Read, Jennifer; Miller, Nick

    2016-06-01

    Neurogenic foreign accent syndrome (FAS) is diagnosed when listeners perceive speech associated with motor speech impairments as foreign rather than disordered. Speakers with foreign accent syndrome typically have aphasia. It remains unclear how far language changes might contribute to the perception of foreign accent syndrome independent of accent. Judges with and without training in language analysis rated orthographic transcriptions of speech from people with foreign accent syndrome, people with speech-language disorder but no foreign accent syndrome, foreign-accented speakers without neurological impairment, and healthy controls on scales of foreignness, normalness and disorderedness. Control speakers were judged as significantly more normal, less disordered and less foreign than the other groups. Foreign accent syndrome speakers' transcriptions consistently profiled most closely with those of foreign speakers and differed significantly from those of speakers with speech-language disorder. On normalness and foreignness ratings there were no significant differences between foreign and foreign accent syndrome speakers. For disorderedness, foreign accent syndrome participants fell midway between foreign speakers and those with speech-language impairment only. Slower rate and more hesitations and pauses within and between utterances influenced judgments, delineating control scripts from the others. Word-level syntactic and morphological deviations and a reduced syntactic and semantic repertoire were strongly linked with foreignness perceptions. Greater disordered ratings were related to word fragments, poorly intelligible grammatical structures and inappropriate word selection. Language changes influence foreignness perception. Clinical and theoretical issues are addressed.

  9. Common variation in the autism risk gene CNTNAP2, brain structural connectivity and multisensory speech integration.

    PubMed

    Ross, Lars A; Del Bene, Victor A; Molholm, Sophie; Jae Woo, Young; Andrade, Gizely N; Abrahams, Brett S; Foxe, John J

    2017-11-01

    Three lines of evidence motivated this study. 1) CNTNAP2 variation is associated with autism risk and speech-language development. 2) CNTNAP2 variations are associated with differences in white matter (WM) tracts comprising the speech-language circuitry. 3) Children with autism show impairment in multisensory speech perception. Here, we asked whether an autism risk-associated CNTNAP2 single nucleotide polymorphism in neurotypical adults was associated with multisensory speech perception performance, and whether such a genotype-phenotype association was mediated through white matter tract integrity in speech-language circuitry. Risk genotype at rs7794745 was associated with decreased benefit from visual speech and lower fractional anisotropy (FA) in several WM tracts (right precentral gyrus, left anterior corona radiata, right retrolenticular internal capsule). These structural connectivity differences were found to mediate the effect of genotype on audiovisual speech perception, shedding light on possible pathogenic pathways in autism and biological sources of inter-individual variation in audiovisual speech processing in neurotypicals. Copyright © 2017 Elsevier Inc. All rights reserved.
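
    The mediation logic here (genotype affecting audiovisual benefit through white-matter integrity) can be illustrated with the simple product-of-coefficients approach. The sketch below uses fabricated data and is a simplification, not the authors' actual statistical model.

```python
# Sketch of a product-of-coefficients mediation test on fabricated data:
# genotype (X) -> fractional anisotropy (M) -> audiovisual benefit (Y).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
genotype = rng.integers(0, 2, n)                       # 0 = non-risk, 1 = risk
fa = 0.5 - 0.05 * genotype + rng.normal(0, 0.03, n)    # toy FA values
av_benefit = 2.0 * fa + rng.normal(0, 0.1, n)          # toy visual-speech gain

a = sm.OLS(fa, sm.add_constant(genotype)).fit().params[1]          # X -> M
b = sm.OLS(av_benefit,
           sm.add_constant(np.column_stack([fa, genotype]))
           ).fit().params[1]                                       # M -> Y | X
print(f"indirect (mediated) effect a*b = {a * b:.3f}")
```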

  10. The Dynamic Nature of Speech Perception

    ERIC Educational Resources Information Center

    McQueen, James M.; Norris, Dennis; Cutler, Anne

    2006-01-01

    The speech perception system must be flexible in responding to the variability in speech sounds caused by differences among speakers and by language change over the lifespan of the listener. Indeed, listeners use lexical knowledge to retune perception of novel speech (Norris, McQueen, & Cutler, 2003). In that study, Dutch listeners made…

  11. Self-Esteem in Children with Speech and Language Impairment: An Exploratory Study of Transition from Language Units to Mainstream School

    ERIC Educational Resources Information Center

    Rannard, Anne; Glenn, Sheila

    2009-01-01

    Little is known about the self-perceptions of children moving from language units to mainstream school. This longitudinal exploratory study examined the effects of transition on perceptions of competence and acceptance in one group of children with speech and language impairment. Seven children and their teachers completed the Pictorial Scale of…

  12. Individual Differences in Language Ability Are Related to Variation in Word Recognition, Not Speech Perception: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    McMurray, Bob; Munson, Cheyenne; Tomblin, J. Bruce

    2014-01-01

    Purpose: The authors examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological, or lexical accounts by asking whether lexical competition is differentially sensitive to fine-grained acoustic variation. Method: Adolescents with a range of language abilities (N = 74, including…

  13. Language Experience Affects Grouping of Musical Instrument Sounds

    ERIC Educational Resources Information Center

    Bhatara, Anjali; Boll-Avetisyan, Natalie; Agus, Trevor; Höhle, Barbara; Nazzi, Thierry

    2016-01-01

    Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of…

  14. Perception of Tone and Aspiration Contrasts in Chinese Children with Dyslexia

    ERIC Educational Resources Information Center

    Cheung, Him; Chung, Kevin K. H.; Wong, Simpson W. L.; McBride-Chang, Catherine; Penney, Trevor B.; Ho, Connie S. H.

    2009-01-01

    Background: Previous research has shown a relationship between speech perception and dyslexia in alphabetic writing. In these studies speech perception was measured using phonemes, a prominent feature of alphabetic languages. Given the primary importance of lexical tone in Chinese language processing, we tested the extent to which lexical tone and…

  15. The effects of speech production and vocabulary training on different components of spoken language performance.

    PubMed

    Paatsch, Louise E; Blamey, Peter J; Sarant, Julia Z; Bow, Catherine P

    2006-01-01

    A group of 21 hard-of-hearing and deaf children attending primary school were trained by their teachers on the production of selected consonants and on the meanings of selected words. Speech production, vocabulary knowledge, reading aloud, and speech perception measures were obtained before and after each type of training. The speech production training produced a small but significant improvement in the percentage of consonants correctly produced in words. The vocabulary training improved knowledge of word meanings substantially. Performance on speech perception and reading aloud was significantly improved by both types of training. These results were in accord with the predictions of a mathematical model put forward to describe the relationships between speech perception, speech production, and language measures in children (Paatsch, Blamey, Sarant, Martin, & Bow, 2004). These training data demonstrate that the relationships between the measures are causal. In other words, improvements in speech production and vocabulary performance produced by training will carry over into predictable improvements in speech perception and reading scores. Furthermore, the model will help educators identify the most effective methods of improving receptive and expressive spoken language for individual children who are deaf or hard of hearing.

  16. Cross-Modal Recruitment of Auditory and Orofacial Areas During Sign Language in a Deaf Subject.

    PubMed

    Martino, Juan; Velasquez, Carlos; Vázquez-Bourgon, Javier; de Lucas, Enrique Marco; Gomez, Elsa

    2017-09-01

    Modern sign languages used by deaf people are fully expressive, natural human languages that are perceived visually and produced manually. The literature contains little data concerning human brain organization in conditions of deficient sensory information such as deafness. A deaf-mute patient underwent awake surgery for a left temporoinsular low-grade glioma with intraoperative electrical stimulation mapping, allowing direct study of the cortical and subcortical organization of sign language. We found a distribution of language sites similar to what has been reported in mapping studies of patients with oral language, including 1) speech perception areas inducing anomias and alexias close to the auditory cortex (at the posterior portion of the superior temporal gyrus and supramarginal gyrus); 2) speech production areas inducing speech arrest (anarthria) at the ventral premotor cortex, close to the lip motor area and away from the hand motor area; and 3) subcortical stimulation-induced semantic paraphasias at the inferior fronto-occipital fasciculus at the temporal isthmus. The intraoperative setup for sign language mapping with intraoperative electrical stimulation in deaf-mute patients is similar to the setup described in patients with oral language. To elucidate the type of language errors, a sign language interpreter working in close interaction with the neuropsychologist is necessary. Sign language is perceived visually and produced manually; however, this case revealed a cross-modal recruitment of auditory and orofacial motor areas. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. School performance and wellbeing of children with CI in different communicative-educational environments.

    PubMed

    Langereis, Margreet; Vermeulen, Anneke

    2015-06-01

    This study aimed to evaluate the long-term effects of CI on the auditory, language, educational and social-emotional development of deaf children in different educational-communicative settings. The outcomes of 58 children with profound hearing loss and normal non-verbal cognition were analyzed after 60 months of CI use. At testing, the children were enrolled in three different educational settings: mainstream education, where spoken language is used; hard-of-hearing education, where sign-supported spoken language is used; and bilingual deaf education, with Sign Language of the Netherlands and Sign Supported Dutch. Children were assessed on auditory speech perception, receptive language, educational attainment and wellbeing. The auditory speech perception skills of children with CI in mainstream education enable them to acquire language and educational levels that are comparable to those of their normal-hearing peers. Although the children in mainstream and hard-of-hearing settings show similar speech perception abilities, language development in children in hard-of-hearing settings lags significantly behind. Speech perception, language and educational attainments of children in deaf education remained extremely poor. Furthermore, more children in mainstream and hard-of-hearing environments are resilient than in deaf educational settings. Regression analyses showed an important influence of educational setting. Children with CI who are placed in early intervention environments that facilitate auditory development are able to achieve good auditory speech perception, language and educational levels in the long term. Most parents of these children report no social-emotional concerns. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Cued Speech for Enhancing Speech Perception and First Language Development of Children With Cochlear Implants

    PubMed Central

    Leybaert, Jacqueline; LaSasso, Carol J.

    2010-01-01

    Nearly 300 million people worldwide have moderate to profound hearing loss. Hearing impairment, if not adequately managed, has a strong socioeconomic and affective impact on individuals. Cochlear implants have become the most effective vehicle for helping profoundly deaf children and adults to understand spoken language, to be sensitive to environmental sounds, and, to some extent, to listen to music. The auditory information delivered by the cochlear implant remains non-optimal for speech perception because it delivers a spectrally degraded signal and lacks some of the fine temporal acoustic structure. In this article, we discuss research revealing the multimodal nature of speech perception in normally-hearing individuals, with important inter-subject variability in the weighting of auditory or visual information. We also discuss how audio-visual training, via Cued Speech, can improve speech perception in cochlear implantees, particularly in noisy contexts. Cued Speech is a system that makes use of visual information from speechreading combined with hand shapes positioned in different places around the face in order to deliver completely unambiguous information about the syllables and the phonemes of spoken language. We support our view that exposure to Cued Speech before or after implantation could be important in the aural rehabilitation process of cochlear implantees. We describe five lines of research that are converging to support the view that Cued Speech can enhance speech perception in individuals with cochlear implants. PMID:20724357

  19. Perception of Melodic Contour and Intonation in Autism Spectrum Disorder: Evidence From Mandarin Speakers.

    PubMed

    Jiang, Jun; Liu, Fang; Wan, Xuan; Jiang, Cunmei

    2015-07-01

    Tone language experience benefits pitch processing in music and speech for typically developing individuals. No known studies have examined pitch processing in individuals with autism who speak a tone language. This study investigated discrimination and identification of melodic contour and speech intonation in a group of Mandarin-speaking individuals with high-functioning autism. Individuals with autism showed superior melodic contour identification but comparable contour discrimination relative to controls. In contrast, these individuals performed worse than controls on both discrimination and identification of speech intonation. These findings provide the first evidence for differential pitch processing in music and speech in tone language speakers with autism, suggesting that tone language experience may not compensate for speech intonation perception deficits in individuals with autism.

  20. The Emergence of the Allophonic Perception of Unfamiliar Speech Sounds: The Effects of Contextual Distribution and Phonetic Naturalness

    ERIC Educational Resources Information Center

    Noguchi, Masaki; Hudson Kam, Carla L.

    2018-01-01

    In human languages, different speech sounds can be contextual variants of a single phoneme, called allophones. Learning which sounds are allophones is an integral part of the acquisition of phonemes. Whether given sounds are separate phonemes or allophones in a listener's language affects speech perception. Listeners tend to be less sensitive to…

  1. Speech Perception and Phonological Short-Term Memory Capacity in Language Impairment: Preliminary Evidence from Adolescents with Specific Language Impairment (SLI) and Autism Spectrum Disorders (ASD)

    ERIC Educational Resources Information Center

    Loucas, Tom; Riches, Nick Greatorex; Charman, Tony; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Baird, Gillian

    2010-01-01

    Background: The cognitive bases of language impairment in specific language impairment (SLI) and autism spectrum disorders (ASD) were investigated in a novel non-word comparison task which manipulated phonological short-term memory (PSTM) and speech perception, both implicated in poor non-word repetition. Aims: This study aimed to investigate the…

  2. International Collegium of Rehabilitative Audiology (ICRA) recommendations for the construction of multilingual speech tests. ICRA Working Group on Multilingual Speech Tests.

    PubMed

    Akeroyd, Michael A; Arlinger, Stig; Bentler, Ruth A; Boothroyd, Arthur; Dillier, Norbert; Dreschler, Wouter A; Gagné, Jean-Pierre; Lutman, Mark; Wouters, Jan; Wong, Lena; Kollmeier, Birger

    2015-01-01

    To provide guidelines for the development of two types of closed-set speech-perception tests that can be applied and interpreted in the same way across languages. The guidelines cover the digit triplet and the matrix sentence tests that are most commonly used to test speech recognition in noise. They were developed by a working group on Multilingual Speech Tests of the International Collegium of Rehabilitative Audiology (ICRA). The recommendations are based on reviews of existing evaluations of the digit triplet and matrix tests as well as on the research experience of members of the ICRA Working Group. They represent the results of a consensus process. The resulting recommendations deal with: Test design and word selection; Talker characteristics; Audio recording and stimulus preparation; Masking noise; Test administration; and Test validation. By following these guidelines for the development of any new test of this kind, clinicians and researchers working in any language will be able to perform tests whose results can be compared and combined in cross-language studies.

  3. Speech monitoring and phonologically-mediated eye gaze in language perception and production: a comparison using printed word eye-tracking

    PubMed Central

    Gauvin, Hanna S.; Hartsuiker, Robert J.; Huettig, Falk

    2013-01-01

    The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye-movements to phonologically related printed words with a similar time-course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants has so far been lacking. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time-course in both production and perception. Phonological effects in perception, however, lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception. PMID:24339809

  4. Speech Perception Deficits in Mandarin-Speaking School-Aged Children with Poor Reading Comprehension

    PubMed Central

    Liu, Huei-Mei; Tsao, Feng-Ming

    2017-01-01

    Previous studies have shown that children learning alphabetic writing systems who have language impairment or dyslexia exhibit speech perception deficits. However, whether such deficits exist in children learning logographic writing systems who have poor reading comprehension remains uncertain. To further explore this issue, the present study examined speech perception deficits in Mandarin-speaking children with poor reading comprehension. Two self-designed tasks, a consonant categorical perception task and a lexical tone discrimination task, were used to compare speech perception performance in children (n = 31, age range = 7;4–10;2) with poor reading comprehension and an age-matched typically developing group (n = 31, age range = 7;7–9;10). Results showed that the children with poor reading comprehension were less accurate on the consonant and lexical tone discrimination tasks and perceived speech contrasts less categorically than the matched group. The correlations between speech perception skills (i.e., consonant and lexical tone discrimination sensitivities and the slope of the consonant identification curve) and individuals’ oral language and reading comprehension were stronger than the correlations between speech perception ability and word recognition ability. In conclusion, the results revealed that Mandarin-speaking children with poor reading comprehension exhibit less-categorical speech perception, suggesting that imprecise speech perception, especially lexical tone perception, needs to be considered in accounting for reading difficulties in Mandarin-speaking children. PMID:29312031

  5. Electrophysiological study of the basal temporal language area: a convergence zone between language perception and production networks.

    PubMed

    Trébuchon-Da Fonseca, Agnès; Bénar, Christian-G; Bartoloméi, Fabrice; Régis, Jean; Démonet, Jean-François; Chauvel, Patrick; Liégeois-Chauvel, Catherine

    2009-03-01

    Regions involved in language processing have been observed in the inferior part of the left temporal lobe. Although collectively labelled 'the Basal Temporal Language Area' (BTLA), these territories are functionally heterogeneous and are involved in language perception (i.e. reading or semantic tasks) or language production (speech arrest after stimulation). The objective of this study was to clarify the role of the BTLA in the language network in an epileptic patient who displayed jargonaphasia. Intracerebral event-related potentials to verbal and non-verbal stimuli in auditory and visual modalities were recorded from the BTLA. Time-frequency analysis was performed during ictal events. Evoked potentials and induced gamma-band activity provided direct evidence that the BTLA is sensitive to language stimuli in both modalities, 350 ms after stimulus presentation. In addition, spontaneous gamma-band discharges were recorded from this region, during which we observed phonological jargon. The findings emphasize the multimodal nature of this region in speech perception. In the context of transient dysfunction, the patient's lexical-semantic processing network is disrupted, reducing spoken output to meaningless phoneme combinations. This rare opportunity to study the BTLA "in vivo" demonstrates its pivotal role in lexico-semantic processing for speech production and its multimodal nature in speech perception.

  6. Neural Entrainment to Rhythmically Presented Auditory, Visual, and Audio-Visual Speech in Children

    PubMed Central

    Power, Alan James; Mead, Natasha; Barnes, Lisa; Goswami, Usha

    2012-01-01

    Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal “samples” of information from the speech stream at different rates, phase resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (“phase locking”). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable “ba,” presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a “talking head”). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the “ba” stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a “ba” in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling, such as dyslexia. PMID:22833726
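
    Phase locking of this kind is commonly quantified as inter-trial phase coherence (ITC): extract each trial's phase at the stimulation frequency and measure how consistently those phases align across trials. A hedged sketch on simulated trials follows; this is an illustration of the general technique, not the authors' EEG pipeline.

```python
# Sketch: inter-trial phase coherence (ITC) at a 2 Hz stimulation rate,
# computed from the FFT phase of each trial. Data below are simulated.
import numpy as np

def itc_at_frequency(trials, fs, freq):
    """trials: (n_trials, n_samples) array of epoched data."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - freq))          # nearest frequency bin
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))        # 1 = perfect locking

# Toy data: 50 trials of a noisy 2 Hz response, 2 s at 250 Hz.
fs, n_trials = 250, 50
t = np.arange(0, 2.0, 1.0 / fs)
trials = np.sin(2 * np.pi * 2 * t) + 0.8 * np.random.randn(n_trials, t.size)
print(f"ITC at 2 Hz: {itc_at_frequency(trials, fs, 2.0):.2f}")
```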

  7. Processing melodic contour and speech intonation in congenital amusics with Mandarin Chinese.

    PubMed

    Jiang, Cunmei; Hamm, Jeff P; Lim, Vanessa K; Kirk, Ian J; Yang, Yufang

    2010-07-01

    Congenital amusia is a disorder in the perception and production of musical pitch. It has been suggested that early exposure to a tonal language may compensate for the pitch disorder (Peretz, 2008). If so, it is reasonable to expect that there would be different characterizations of pitch perception in music and speech in congenital amusics who speak a tonal language, such as Mandarin. In this study, a group of 11 adults with amusia whose first language was Mandarin were tested with melodic contour and speech intonation discrimination and identification tasks. The participants with amusia were impaired in discriminating and identifying melodic contour. These abnormalities were also detected in identifying both speech and non-linguistic analogue-derived patterns in the Mandarin intonation tasks. In addition, there was an overall trend for the participants with amusia to show deficits with respect to controls in the intonation discrimination tasks for both speech and non-linguistic analogues. These findings suggest that the amusics' melodic pitch deficits may extend to the perception of speech, and could potentially result in some language deficits in those who speak a tonal language. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. Relationship between individual differences in speech processing and cognitive functions.

    PubMed

    Ou, Jinghua; Law, Sam-Po; Fung, Roxana

    2015-12-01

    A growing body of research has suggested that cognitive abilities may play a role in individual differences in speech processing. The present study took advantage of a widespread linguistic phenomenon of sound change to systematically assess the relationships between speech processing and various components of attention and working memory in the auditory and visual modalities among typically developed Cantonese-speaking individuals. The individual variations in speech processing are captured by an ongoing sound change, tone merging in Hong Kong Cantonese, in which typically developed native speakers are reported to lose the distinctions between some tonal contrasts in perception and/or production. Three groups of participants were recruited: a first group with good perception and production, a second group with good perception but poor production, and a third group with good production but poor perception. Our findings revealed that modality-independent abilities of attentional switching/control and working memory might contribute to individual differences in patterns of speech perception and production as well as discrimination latencies among typically developed speakers. The findings not only have the potential to generalize to speech processing in other languages, but also broaden our understanding of the omnipresent phenomenon of language change in all languages.

  9. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    PubMed

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and add to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation: Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound disorders. Non-speech oral motor exercise use was most frequently reported in the treatment of dysarthria. Non-speech oral motor exercise use when targeting speech sound disorders is not widely endorsed in the literature.

  10. Laterality and unilateral deafness: Patients with congenital right ear deafness do not develop atypical language dominance.

    PubMed

    Van der Haegen, Lise; Acke, Frederic; Vingerhoets, Guy; Dhooge, Ingeborg; De Leenheer, Els; Cai, Qing; Brysbaert, Marc

    2016-12-01

    Auditory speech perception, speech production and reading lateralize to the left hemisphere in the majority of healthy right-handers. In this study, we investigated to what extent sensory input underlies the side of language dominance. We measured the lateralization of the three core subprocesses of language in patients who had profound hearing loss in the right ear from birth and in matched control subjects. They took part in a semantic decision listening task involving speech and sound stimuli (auditory perception), a word generation task (speech production) and a passive reading task (reading). The results show that a lack of sensory auditory input on the right side, which is strongly connected to the contralateral left hemisphere, does not lead to atypical lateralization of speech perception. Speech production and reading were also typically left lateralized in all but one patient, contradicting previous small-scale studies. Other factors such as genetic constraints presumably overrule the role of sensory input in the development of (a)typical language lateralization. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. The Role of Native-Language Knowledge in the Perception of Casual Speech in a Second Language

    PubMed Central

    Mitterer, Holger; Tuinman, Annelie

    2012-01-01

    Casual speech processes, such as /t/-reduction, make word recognition harder. Additionally, word recognition is also harder in a second language (L2). Combining these challenges, we investigated whether L2 learners have recourse to knowledge from their native language (L1) when dealing with casual speech processes in their L2. In three experiments, production and perception of /t/-reduction was investigated. An initial production experiment showed that /t/-reduction occurred in both languages and patterned similarly in proper nouns but differed when /t/ was a verbal inflection. Two perception experiments compared the performance of German learners of Dutch with that of native speakers for nouns and verbs. Mirroring the production patterns, German learners’ performance strongly resembled that of native Dutch listeners when the reduced /t/ was part of a word stem, but deviated where /t/ was a verbal inflection. These results suggest that a casual speech process in a second language is problematic for learners when the process is not known from the learner’s native language, similar to what has been observed for phoneme contrasts. PMID:22811675

  12. Effects of Different Types of Corrective Feedback on Receptive Skills in a Second Language: A Speech Perception Training Study

    ERIC Educational Resources Information Center

    Lee, Andrew H.; Lyster, Roy

    2016-01-01

    This study investigated the effects of different types of corrective feedback (CF) provided during second language (L2) speech perception training. One hundred Korean learners of L2 English, randomly assigned to five groups (n = 20 per group), participated in eight computer-assisted perception training sessions targeting two minimal pairs of…

  13. Stakeholders' Qualitative Perspectives of Effective Telepractice Pedagogy in Speech-Language Pathology

    ERIC Educational Resources Information Center

    Overby, Megan S.

    2018-01-01

    Background: Academic programmes in speech-language pathology are increasingly providing telehealth/telepractice clinical education to students. Despite this growth, there is little information describing effective ways to teach it. Aims: The current exploratory study analyzed the perceptions of speech-language pathology/therapy (SLP/SLT) faculty,…

  14. The McGurk effect in children with autism and Asperger syndrome.

    PubMed

    Bebko, James M; Schroeder, Jessica H; Weiss, Jonathan A

    2014-02-01

    Children with autism may have difficulties in audiovisual speech perception, which has been linked to speech perception and language development. However, little has been done to examine children with Asperger syndrome as a group on tasks assessing audiovisual speech perception, despite this group's often greater language skills. Samples of children with autism, Asperger syndrome, and Down syndrome, as well as a typically developing sample, were presented with an auditory-only condition, a speech-reading condition, and an audiovisual condition designed to elicit the McGurk effect. Children with autism demonstrated unimodal performance at the same level as the other groups, yet showed a lower rate of the McGurk effect compared with the Asperger, Down and typical samples. These results suggest that children with autism may have unique intermodal speech perception difficulties linked to their representations of speech sounds. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.

  15. Visual and Auditory Input in Second-Language Speech Processing

    ERIC Educational Resources Information Center

    Hardison, Debra M.

    2010-01-01

    The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…

  16. The impact of phonetic dissimilarity on the perception of foreign accented speech

    NASA Astrophysics Data System (ADS)

    Weil, Shawn A.

    2003-10-01

    Non-normative speech (i.e., synthetic speech, pathological speech, foreign-accented speech) is more difficult for native listeners to process than is normative speech. Does perceptual dissimilarity affect only intelligibility, or are there other processing costs? The current series of experiments investigates both the intelligibility and the time course of foreign-accented speech (FAS) perception. Native English listeners heard single English words spoken by both native English speakers and non-native speakers (Mandarin or Russian). Words were chosen based on the similarity between the phonetic inventories of the respective languages. Three experimental designs were used: a cross-modal matching task, a word repetition (shadowing) task, and two subjective ratings tasks that measured impressions of accentedness and effortfulness. The results replicate previous investigations finding that FAS significantly lowers word intelligibility. Furthermore, FAS increases perceptual effort: in the word repetition task, correct responses to accented words are slower than those to nonaccented words. An analysis indicates that both intelligibility and reaction time are, in part, functions of the similarity between the talker's utterance and the listener's representation of the word.

  17. Impact of Language on Development of Auditory-Visual Speech Perception

    ERIC Educational Resources Information Center

    Sekiyama, Kaoru; Burnham, Denis

    2008-01-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various…

  18. Speech-Language Pathologists' and Teachers' Perceptions of Classroom-Based Interventions.

    ERIC Educational Resources Information Center

    Beck, Ann R.; Dennis, Marcia

    1997-01-01

    Speech-language pathologists (N=21) and teachers (N=54) were surveyed regarding their perceptions of classroom-based interventions. The two groups agreed about the primary advantages and disadvantages of most interventions, the primary areas of difference being classroom management and ease of data collection. Other findings indicated few…

  19. Language Awareness and Perception of Connected Speech in a Second Language

    ERIC Educational Resources Information Center

    Kennedy, Sara; Blanchet, Josée

    2014-01-01

    To be effective second or additional language (L2) listeners, learners should be aware of typical processes in connected L2 speech (e.g. linking). This longitudinal study explored how learners' developing ability to perceive connected L2 speech was related to the quality of their language awareness. Thirty-two learners of L2 French at a university…

  20. Speech Perception in the Classroom.

    ERIC Educational Resources Information Center

    Smaldino, Joseph J.; Crandell, Carl C.

    1999-01-01

    This article discusses how poor room acoustics can make speech inaudible and presents a speech-perception model demonstrating the linkage between the adequacy of classroom acoustics and the development of speech and language systems. It argues that both aspects must be considered when evaluating barriers to listening and learning in a classroom.…

  1. Perception of speech rhythm in second language: the case of rhythmically similar L1 and L2

    PubMed Central

    Ordin, Mikhail; Polyanskaya, Leona

    2015-01-01

    We investigated the perception of developmental changes in timing patterns that happen in the course of second language (L2) acquisition, provided that the native and the target languages of the learner are rhythmically similar (German and English). It was found that speech rhythm in L2 English produced by German learners becomes increasingly stress-timed as acquisition progresses. This development is captured by the tempo-normalized rhythm measures of durational variability. Advanced learners also deliver speech at a faster rate. However, when native speakers have to classify the timing patterns characteristic of L2 English of German learners at different proficiency levels, they attend to speech rate cues and ignore the differences in speech rhythm. PMID:25859228
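
    Two widely used tempo-normalized metrics of durational variability of the kind referred to above are VarcoV (rate-normalized standard deviation of vocalic intervals) and the normalized Pairwise Variability Index (nPVI). The abstract does not state which specific measures were used, so treat the sketch below, with invented interval durations, as a representative example rather than the study's analysis.

```python
# Sketch: two tempo-normalized rhythm metrics over vocalic interval durations.
# The interval durations below are invented for illustration.
import numpy as np

def varco(durations):
    """Standard deviation of intervals normalized by their mean (x100)."""
    d = np.asarray(durations, dtype=float)
    return 100.0 * d.std() / d.mean()

def npvi(durations):
    """Mean normalized difference between successive intervals (x100)."""
    d = np.asarray(durations, dtype=float)
    return 100.0 * np.mean([abs(a - b) / ((a + b) / 2.0)
                            for a, b in zip(d[:-1], d[1:])])

vowel_intervals = [0.09, 0.21, 0.08, 0.19, 0.11]       # seconds, toy values
print(f"VarcoV = {varco(vowel_intervals):.1f}, nPVI = {npvi(vowel_intervals):.1f}")
```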

  2. Speech Recognition Software for Language Learning: Toward an Evaluation of Validity and Student Perceptions

    ERIC Educational Resources Information Center

    Cordier, Deborah

    2009-01-01

    A renewed focus on foreign language (FL) learning and speech for communication has resulted in computer-assisted language learning (CALL) software developed with Automatic Speech Recognition (ASR). ASR features for FL pronunciation (Lafford, 2004) are functional components of CALL designs used for FL teaching and learning. The ASR features…

  3. Cross-cultural differences in beliefs and practices that affect the language spoken to children: mothers with Indian and Western heritage.

    PubMed

    Simmons, Noreen; Johnston, Judith

    2007-01-01

    Speech-language pathologists often advise families about interaction patterns that will facilitate language learning. This advice is typically based on research with North American families of European heritage and may not be culturally suited for non-Western families. The goal of the project was to identify differences in the beliefs and practices of Indian and Euro-Canadian mothers that would affect patterns of talk to children. A total of 47 Indian mothers and 51 Euro-Canadian mothers of preschool-age children completed a written survey concerning child-rearing practices and beliefs, especially those about talk to children. Discriminant analyses indicated clear cross-cultural differences and produced functions that could predict group membership with a 96% accuracy rate. Items contributing most to these functions concerned the importance of family, perceptions of language learning, children's use of language in family and society, and interactions surrounding text. Speech-language pathologists who wish to adapt their services for families of Indian heritage should remember the centrality of the family, the likelihood that there will be less emphasis on early independence and achievement, and the preference for direct instruction.
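
    The discriminant analysis reported above is, in modern terms, a linear discriminant classifier scored by how well it predicts group membership. A hypothetical re-creation of the analysis type on fabricated Likert-style survey data (group sizes follow the abstract; nothing here is the actual dataset):

```python
# Sketch: linear discriminant analysis predicting cultural-group membership
# from survey items, scored by cross-validated accuracy. Data are fabricated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_a, n_b, n_items = 47, 51, 10                  # group sizes as reported above
group_a = rng.normal(3.0, 1.0, (n_a, n_items))  # toy Likert-style responses
group_b = rng.normal(4.0, 1.0, (n_b, n_items))
X = np.vstack([group_a, group_b])
y = np.array([0] * n_a + [1] * n_b)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"cross-validated classification accuracy: {acc:.0%}")
```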

  4. An Exploration of Rhythmic Grouping of Speech Sequences by French- and German-Learning Infants

    PubMed Central

    Abboub, Nawal; Boll-Avetisyan, Natalie; Bhatara, Anjali; Höhle, Barbara; Nazzi, Thierry

    2016-01-01

    Rhythm in music and speech can be characterized by a constellation of several acoustic cues. Individually, these cues have different effects on rhythmic perception: sequences of sounds alternating in duration are perceived as short-long pairs (weak-strong/iambic pattern), whereas sequences of sounds alternating in intensity or pitch are perceived as loud-soft, or high-low pairs (strong-weak/trochaic pattern). This perceptual bias, called the Iambic-Trochaic Law (ITL), has been claimed to be a universal property of the auditory system, applying in both the music and the language domains. Recent studies have shown that language experience can modulate the effects of the ITL on rhythmic perception of both speech and non-speech sequences in adults, and of non-speech sequences in 7.5-month-old infants. The goal of the present study was to explore whether language experience also modulates infants’ grouping of speech. To do so, we presented sequences of syllables to monolingual French- and German-learning 7.5-month-olds. Using the Headturn Preference Procedure (HPP), we examined whether they were able to perceive a rhythmic structure in sequences of syllables that alternated in duration, pitch, or intensity. Our findings show that both French- and German-learning infants perceived a rhythmic structure when it was cued by duration or pitch but not intensity. Our findings also show differences in how these infants use duration and pitch cues to group syllable sequences, suggesting that pitch cues were the easier ones to use. Moreover, performance did not differ across languages, failing to reveal early language effects on rhythmic perception. These results contribute to our understanding of the origin of rhythmic perception and perceptual mechanisms shared across music and speech, which may bootstrap language acquisition. PMID:27378887

  5. Bullying in Children Who Stutter: Speech-Language Pathologists' Perceptions and Intervention Strategies

    ERIC Educational Resources Information Center

    Blood, Gordon W.; Boyle, Michael P.; Blood, Ingrid M.; Nalesnik, Gina R.

    2010-01-01

    Bullying in school-age children is a global epidemic. School personnel play a critical role in eliminating this problem. The goals of this study were to examine speech-language pathologists' (SLPs) perceptions of bullying, endorsement of potential strategies for dealing with bullying, and associations among SLPs' responses and specific demographic…

  6. General Auditory Processing, Speech Perception and Phonological Awareness Skills in Chinese-English Biliteracy

    ERIC Educational Resources Information Center

    Chung, Kevin K. H.; McBride-Chang, Catherine; Cheung, Him; Wong, Simpson W. L.

    2013-01-01

    This study focused on the associations of general auditory processing, speech perception, phonological awareness and word reading in Cantonese-speaking children from Hong Kong learning to read both Chinese (first language [L1]) and English (second language [L2]). Children in Grades 2--4 ("N" = 133) participated and were administered…

  7. Mexican Immigrant Mothers' Perceptions of Their Children's Communication Disabilities, Emergent Literacy Development, and Speech-Language Therapy Program

    ERIC Educational Resources Information Center

    Kummerer, Sharon E.; Lopez-Reyna, Norma A.; Hughes, Marie Tejero

    2007-01-01

    Purpose: This qualitative study explored mothers' perceptions of their children's communication disabilities, emergent literacy development, and speech-language therapy programs. Method: Participants were 14 Mexican immigrant mothers and their children (age 17-47 months) who were receiving center-based services from an early childhood intervention…

  8. Written Language Disorders: Speech-Language Pathologists' Training, Knowledge, and Confidence

    ERIC Educational Resources Information Center

    Blood, Gordon W.; Mamett, Callie; Gordon, Rebecca; Blood, Ingrid M.

    2010-01-01

    Purpose: This study examined speech-language pathologists' (SLPs') perceptions of their (a) educational and clinical training in evaluating and treating written language disorders, (b) knowledge bases in this area, (c) sources of knowledge about written language disorders, (d) confidence levels, and (e) predictors of confidence in working with…

  9. Cross-language categorization of French and German vowels by naive American listeners.

    PubMed

    Strange, Winifred; Levy, Erika S; Law, Franzo F

    2009-09-01

    American English (AE) speakers' perceptual assimilation of 14 North German (NG) and 9 Parisian French (PF) vowels was examined in two studies using citation-form disyllables (study 1) and sentences with vowels surrounded by labial and alveolar consonants in multisyllabic nonsense words (study 2). Listeners categorized multiple tokens of each NG and PF vowel as most similar to selected AE vowels and rated their category "goodness" on a nine-point Likert scale. Front, rounded vowels were assimilated primarily to back AE vowels, despite their acoustic similarity to front AE vowels. In study 1, they were considered poorer exemplars of AE vowels than were NG and PF back, rounded vowels; in study 2, front and back, rounded vowels were perceived as similar to each other. Assimilation of some front, unrounded and back, rounded NG and PF vowels varied with language, speaking style, and consonantal context. Differences in perceived similarity often could not be predicted from context-specific cross-language spectral similarities. Results suggest that listeners can access context-specific, phonetic details when listening to citation-form materials, but assimilate non-native vowels on the basis of context-independent phonological equivalence categories when processing continuous speech. Results are interpreted within the Automatic Selective Perception model of speech perception.

  10. Segmental and Suprasegmental Perception in Children Using Hearing Aids.

    PubMed

    Wenrich, Kaitlyn A; Davidson, Lisa S; Uchanski, Rosalie M

    Suprasegmental perception (perception of stress, intonation, "how something is said" and "who says it") and segmental speech perception (perception of individual phonemes or perception of "what is said") are perceptual abilities that provide the foundation for the development of spoken language and effective communication. While there are numerous studies examining segmental perception in children with hearing aids (HAs), there are far fewer studies examining suprasegmental perception, especially for children with greater degrees of residual hearing. Examining the relation between acoustic hearing thresholds, and both segmental and suprasegmental perception for children with HAs, may ultimately enable better device recommendations (bilateral HAs, bimodal devices [one CI and one HA in opposite ears], bilateral CIs) for a particular degree of residual hearing. Examining both types of speech perception is important because segmental and suprasegmental cues are affected differentially by the type of hearing device(s) used (i.e., cochlear implant [CI] and/or HA). Additionally, suprathreshold measures, such as frequency resolution ability, may partially predict benefit from amplification and may assist audiologists in making hearing device recommendations. The purpose of this study is to explore the relationship between audibility (via hearing thresholds and speech intelligibility indices), and segmental and suprasegmental speech perception for children with HAs. A secondary goal is to explore the relationships among frequency resolution ability (via spectral modulation detection [SMD] measures), segmental and suprasegmental speech perception, and receptive language in these same children. The study used a prospective cross-sectional design. Twenty-three children, ages 4 yr 11 mo to 11 yr 11 mo, participated in the study. Participants were recruited from pediatric clinic populations, oral schools for the deaf, and mainstream schools. Audiological history and hearing device information were collected from participants and their families. Segmental and suprasegmental speech perception, SMD, and receptive vocabulary skills were assessed. Correlations were calculated to examine the significance (p < 0.05) of relations between audibility and outcome measures. Measures of audibility and segmental speech perception are not significantly correlated, while low-frequency pure-tone average (unaided) is significantly correlated with suprasegmental speech perception. SMD is significantly correlated with all measures (measures of audibility, segmental and suprasegmental perception and vocabulary). Lastly, although age is not significantly correlated with measures of audibility, it is significantly correlated with all other outcome measures. The absence of a significant correlation between audibility and segmental speech perception might be attributed to overall audibility being maximized through well-fit HAs. The significant correlation between low-frequency unaided audibility and suprasegmental measures is likely due to the strong, predominantly low-frequency nature of suprasegmental acoustic properties. Frequency resolution ability, via SMD performance, is significantly correlated with all outcomes and requires further investigation; its significant correlation with vocabulary suggests that linguistic ability may be partially related to frequency resolution ability. Lastly, all of the outcome measures are significantly correlated with age, suggestive of developmental effects. American Academy of Audiology
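
    The correlational analyses described above reduce to Pearson correlations tested at the stated p < 0.05 criterion. A minimal sketch follows; the variable names and values are invented, not the study's data.

```python
# Sketch: Pearson correlation with significance test, as in the analyses
# described above. The two variables below are fabricated toy data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 23                                          # sample size reported above
smd_score = rng.normal(0, 1, n)                 # spectral modulation detection
vocabulary = 100 + 10 * smd_score + rng.normal(0, 8, n)

r, p = pearsonr(smd_score, vocabulary)
print(f"r = {r:.2f}, p = {p:.3f}, significant at 0.05: {p < 0.05}")
```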

  12. New graduates’ perceptions of preparedness to provide speech-language therapy services in general and dysphagia services in particular

    PubMed Central

    Booth, Alannah; Choto, Fadziso; Gotlieb, Jessica; Robertson, Rebecca; Morris, Gabriella; Stockley, Nicola; Mauff, Katya

    2015-01-01

    Background Upon graduation, newly qualified speech-language therapists are expected to provide services independently. This study describes new graduates’ perceptions of their preparedness to provide services across the scope of the profession and explores associations between perceptions of dysphagia theory and clinical learning curricula and perceptions of preparedness for adult and paediatric dysphagia service delivery. Methods New graduates of six South African universities were recruited to participate in a survey by completing an electronic questionnaire exploring their perceptions of the dysphagia curricula and their preparedness to practise across the scope of the profession of speech-language therapy. Results Eighty graduates participated in the study, yielding a response rate of 63.49%. Participants perceived themselves to be well prepared in some areas (e.g. child language: 100%; articulation and phonology: 97.26%), but less prepared in other areas (e.g. adult dysphagia: 50.70%; paediatric dysarthria: 46.58%; paediatric dysphagia: 38.36%) and most unprepared to provide services requiring sign language (23.61%) and African languages (20.55%). There was a significant relationship between perceptions of adequate theory and clinical learning opportunities in the assessment and management of dysphagia, and perceptions of preparedness to provide dysphagia services. Conclusion There is a need for review of existing curricula and consideration of developing a standard speech-language therapy curriculum across universities, particularly in service provision to a multilingual population, and in both the theory and clinical learning of the assessment and management of adult and paediatric dysphagia, to better equip graduates for practice. PMID:26304217

  13. Speech perception and phonological short-term memory capacity in language impairment: preliminary evidence from adolescents with specific language impairment (SLI) and autism spectrum disorders (ASD).

    PubMed

    Loucas, Tom; Riches, Nick Greatorex; Charman, Tony; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Baird, Gillian

    2010-01-01

    The cognitive bases of language impairment in specific language impairment (SLI) and autism spectrum disorders (ASD) were investigated in a novel non-word comparison task which manipulated phonological short-term memory (PSTM) and speech perception, both implicated in poor non-word repetition. This study aimed to investigate the contributions of PSTM and speech perception to non-word processing and whether individuals with SLI and ASD plus language impairment (ALI) show similar or different patterns of deficit in these cognitive processes. Three groups of adolescents (aged 14-17 years), 14 with SLI, 16 with ALI, and 17 age- and non-verbal-IQ-matched typically developing (TD) controls, made speeded discriminations between non-word pairs. Stimuli varied in PSTM load (two or four syllables) and speech perception load (mismatches on a word-initial or word-medial segment). Reaction times showed effects of both non-word length and mismatch position, and these factors interacted: four-syllable and word-initial mismatch stimuli resulted in the slowest decisions. Individuals with language impairment showed the same pattern of performance as those with typical development in the reaction time data. A marginal interaction between group and item length was driven by the SLI and ALI groups being less accurate with long items than short ones, a difference not found in the TD group. Non-word discrimination suggests that there are similarities and differences between adolescents with SLI and ALI and their TD peers. Reaction times appear to be affected by increasing PSTM and speech perception loads in a similar way. However, there was some, albeit weaker, evidence that adolescents with SLI and ALI are less accurate than TD individuals, with both groups showing an effect of PSTM load. This may indicate that, at some level, the processing substrate supporting both PSTM and speech perception is intact in adolescents with SLI and ALI, but that access to PSTM resources may be impaired in both groups.
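
    The length-by-mismatch-position interaction reported in the reaction times can be illustrated with a conventional two-way ANOVA. The sketch below uses synthetic trial data shaped to mimic the reported pattern (slowest decisions for four-syllable, word-initial items); it is not the authors' analysis.

    ```python
    # Two-way ANOVA on synthetic reaction-time data: main effects of non-word
    # length and mismatch position, plus their interaction.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(1)
    rows = []
    for length in ("2syll", "4syll"):
        for position in ("initial", "medial"):
            base = 900.0                      # hypothetical baseline RT (ms)
            base += 120 if length == "4syll" else 0
            base += 80 if position == "initial" else 0
            base += 60 if (length == "4syll" and position == "initial") else 0
            for rt in rng.normal(base, 90, 40):
                rows.append({"rt": rt, "length": length, "position": position})

    df = pd.DataFrame(rows)
    model = smf.ols("rt ~ C(length) * C(position)", data=df).fit()
    print(anova_lm(model, typ=2))             # main effects and interaction
    ```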

  14. Cross-Cultural Adaptation of the Intelligibility in Context Scale for South Africa

    ERIC Educational Resources Information Center

    Pascoe, Michelle; McLeod, Sharynne

    2016-01-01

    The Intelligibility in Context Scale (ICS) is a screening questionnaire that focuses on parents' perceptions of children's speech in different contexts. Originally developed in English, it has been translated into 60 languages and the validity and clinical utility of the scale has been documented in a range of countries. In South Africa, there are…

  15. Speech Research

    NASA Astrophysics Data System (ADS)

    Several articles addressing topics in speech research are presented. The topics include: exploring the functional significance of physiological tremor: a biospectroscopic approach; differences between experienced and inexperienced listeners to deaf speech; a language-oriented view of reading and its disabilities; phonetic factors in letter detection; categorical perception; short-term recall by deaf signers of American Sign Language; a common basis for auditory sensory storage in perception and immediate memory; phonological awareness and verbal short-term memory; initiation versus execution time during manual and oral counting by stutterers; trading relations in the perception of speech by five-year-old children; the role of the strap muscles in pitch lowering; phonetic validation of distinctive features; consonants and syllable boundaries; and vowel information in postvocalic frication.

  16. Noise on, voicing off: Speech perception deficits in children with specific language impairment.

    PubMed

    Ziegler, Johannes C; Pech-Georgel, Catherine; George, Florence; Lorenzi, Christian

    2011-11-01

    Speech perception of four phonetic categories (voicing, place, manner, and nasality) was investigated in children with specific language impairment (SLI) (n=20) and age-matched controls (n=19) in quiet and various noise conditions using an AXB two-alternative forced-choice paradigm. Children with SLI exhibited robust speech perception deficits in silence, stationary noise, and amplitude-modulated noise. Comparable deficits were obtained for fast, intermediate, and slow modulation rates, and this speaks against the various temporal processing accounts of SLI. Children with SLI exhibited normal "masking release" effects (i.e., better performance in fluctuating noise than in stationary noise), again suggesting relatively spared spectral and temporal auditory resolution. In terms of phonetic categories, voicing was more affected than place, manner, or nasality. The specific nature of this voicing deficit is hard to explain with general processing impairments in attention or memory. Finally, speech perception in noise correlated with an oral language component but not with either a memory or IQ component, and it accounted for unique variance beyond IQ and low-level auditory perception. In sum, poor speech perception seems to be one of the primary deficits in children with SLI that might explain poor phonological development, impaired word production, and poor word comprehension. Copyright © 2011 Elsevier Inc. All rights reserved.
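
    "Masking release," as used above, is simply the performance difference between fluctuating (amplitude-modulated) and stationary noise. A minimal sketch, with hypothetical scores:

    ```python
    def masking_release(score_fluctuating: float, score_stationary: float) -> float:
        """Benefit (percentage points) of fluctuating over stationary noise."""
        return score_fluctuating - score_stationary

    # e.g., 72% correct in amplitude-modulated noise vs. 55% in stationary noise
    print(masking_release(72.0, 55.0))   # -> 17.0 points of release
    ```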

  17. The neural processing of masked speech

    PubMed Central

    Scott, Sophie K; McGettigan, Carolyn

    2014-01-01

    Spoken language is rarely heard in silence, and a great deal of interest in psychoacoustics has focused on the ways that the perception of speech is affected by properties of masking noise. In this review we first briefly outline the neuroanatomy of speech perception. We then summarise the neurobiological aspects of the perception of masked speech, and investigate this as a function of masker type, masker level and task. PMID:23685149

  18. Childhood apraxia of speech: A survey of praxis and typical speech characteristics.

    PubMed

    Malmenholt, Ann; Lohmander, Anette; McAllister, Anita

    2017-07-01

    The purpose of this study was to investigate current knowledge of the diagnosis of childhood apraxia of speech (CAS) in Sweden and to compare speech characteristics and symptoms to those reported in earlier surveys of mainly English speakers. In a web-based questionnaire, 178 Swedish speech-language pathologists (SLPs) anonymously answered questions about their perception of typical speech characteristics for CAS. They rated their own assessment skills and estimated the clinical occurrence of CAS. The seven top speech characteristics reported as typical for children with CAS were: inconsistent speech production (85%), sequencing difficulties (71%), oro-motor deficits (63%), vowel errors (62%), voicing errors (61%), consonant cluster deletions (54%), and prosodic disturbance (53%). Motor-programming deficits, described as a lack of automatization of speech movements, were reported by 82%. All listed characteristics were consistent with the American Speech-Language-Hearing Association (ASHA) consensus-based features, Strand's 10-point checklist, and the diagnostic model proposed by Ozanne. The mode for clinical occurrence was 5%. The number of suspected cases of CAS in the clinical caseload was approximately one new patient per SLP per year. The results support and add to findings from studies of CAS in English-speaking children, with similar speech characteristics regarded as typical. Possibly, these findings could contribute to cross-linguistic consensus on CAS characteristics.

  19. Tuning in and tuning out: Speech perception in native- and foreign-talker babble

    NASA Astrophysics Data System (ADS)

    van Heukelem, Kristin; Bradlow, Ann R.

    2005-09-01

    Studies on speech perception in multitalker babble have revealed asymmetries in the effects of noise on native versus foreign-accented speech intelligibility for native listeners [Rogers et al., Lang Speech 47(2), 139-154 (2004)] and on sentence-in-noise perception by native versus non-native listeners [Mayo et al., J. Speech Lang. Hear. Res., 40, 686-693 (1997)], suggesting that the linguistic backgrounds of talkers and listeners contribute to the effects of noise on speech perception. However, little attention has been paid to the language of the babble. This study tested whether the language of the noise also has asymmetrical effects on listeners. Replicating previous findings [e.g., Bronkhorst and Plomp, J. Acoust. Soc. Am., 92, 3132-3139 (1992)], the results showed poorer English sentence recognition by native English listeners in six-talker babble than in two-talker babble regardless of the language of the babble, demonstrating the effect of increased psychoacoustic/energetic masking. In addition, the results showed that in the two-talker babble condition, native English listeners were more adversely affected by English than Chinese babble. These findings demonstrate informational/cognitive masking on sentence-in-noise recognition in the form of linguistic competition. Whether this competition is at the lexical or sublexical level and whether it is modulated by the phonetic similarity between the target and noise languages remains to be determined.
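
    Multitalker babble of the kind used here is typically built by summing talker recordings and scaling the sum to a target signal-to-noise ratio. The sketch below shows one common RMS-based mixing scheme; the signals are random-noise stand-ins for speech, and the function is illustrative rather than the study's procedure.

    ```python
    # Build an N-talker babble masker and mix it with a target at a desired SNR.
    import numpy as np

    def rms(x: np.ndarray) -> float:
        return float(np.sqrt(np.mean(x ** 2)))

    def mix_at_snr(target: np.ndarray, talkers: list, snr_db: float) -> np.ndarray:
        """Sum talkers into babble, scale so the target/babble ratio equals snr_db."""
        n = min(len(target), *(len(t) for t in talkers))
        babble = np.sum([t[:n] for t in talkers], axis=0)
        gain = rms(target[:n]) / (rms(babble) * 10 ** (snr_db / 20))
        return target[:n] + gain * babble

    # two- vs. six-talker babble at 0 dB SNR (noise stand-ins for speech signals)
    rng = np.random.default_rng(2)
    target = rng.normal(0, 1, 16000)
    two_talker = mix_at_snr(target, [rng.normal(0, 1, 16000) for _ in range(2)], 0.0)
    six_talker = mix_at_snr(target, [rng.normal(0, 1, 16000) for _ in range(6)], 0.0)
    ```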

  20. Long-Term Speech and Language Outcomes in Prelingually Deaf Children, Adolescents and Young Adults Who Received Cochlear Implants in Childhood

    PubMed Central

    Ruffin, Chad V.; Kronenberger, William G.; Colson, Bethany G.; Henning, Shirley C.; Pisoni, David B.

    2013-01-01

    This study investigated long-term speech and language outcomes in 51 prelingually deaf children, adolescents, and young adults who received cochlear implants (CIs) prior to 7 years of age and used their implants for at least 7 years. Average speech perception scores were similar to those found in prior research with other samples of experienced CI users. Mean language test scores were lower than norm-referenced scores from nationally representative normal-hearing, typically-developing samples, although a majority of the CI users scored within one standard deviation of the normative mean or higher on the Peabody Picture Vocabulary Test, Fourth Edition (63%) and Clinical Evaluation of Language Fundamentals, Fourth Edition (69%). Speech perception scores were negatively associated with a meningitic etiology of hearing loss, older age at implantation, poorer pre-implant unaided pure tone average thresholds, lower family income, and the use of Total Communication. Users of CIs for 15 years or more were more likely to have these characteristics and were more likely to score lower on measures of speech perception compared to users of CIs for 14 years or less. The aggregation of these risk factors in the ≥15 years of CI use subgroup accounts for their lower speech perception scores and may stem from more conservative CI candidacy criteria in use at the beginning of pediatric cochlear implantation. PMID:23988907

  1. Some cross-linguistic evidence for modulation of implicational universals by language-specific frequency effects in phonological development

    PubMed Central

    Edwards, Jan; Beckman, Mary E.

    2009-01-01

    While broad-focus comparisons of consonant inventories across children acquiring different languages can suggest that phonological development follows a universal sequence, finer-grained statistical comparisons can reveal systematic differences. This cross-linguistic study of word-initial lingual obstruents examined some effects of language-specific frequencies on consonant mastery. Repetitions of real words were elicited from 2- and 3-year-old children who were monolingual speakers of English, Cantonese, Greek, or Japanese. The repetitions were recorded and transcribed by an adult native speaker for each language. The results support both language-universal effects in phonological acquisition and language-specific influences related to phoneme and phoneme-sequence frequency. These results suggest that acquisition patterns that are common across languages arise in two ways. One influence is direct, via the universal constraints imposed by the physiology and physics of speech production and perception, and how these predict which contrasts will be easy and which will be difficult for the child to learn to control. The other influence is indirect, via the way universal principles of ease of perception and production tend to influence the lexicons of many languages through commonly attested sound changes. PMID:19890438

  2. School-Based Speech-Language Pathologists' Knowledge and Perceptions of Autism Spectrum Disorder and Bullying

    ERIC Educational Resources Information Center

    Ofe, Erin E.; Plumb, Allison M.; Plexico, Laura W.; Haak, Nancy J.

    2016-01-01

    Purpose: The purpose of the current investigation was to examine speech-language pathologists' (SLPs') knowledge and perceptions of bullying, with an emphasis on autism spectrum disorder (ASD). Method: A 46-item, web-based survey was used to address the purposes of this investigation. Participants were recruited through e-mail and electronic…

  3. The Bilingual Language Interaction Network for Comprehension of Speech

    PubMed Central

    Marian, Viorica

    2013-01-01

    During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension. PMID:24363602
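
    The self-organizing maps underlying BLINCS follow the standard Kohonen update: each input pulls its best-matching unit, and that unit's neighbours, toward the input. A toy sketch of that building block (not the BLINCS implementation itself, which uses dynamic maps and interconnected levels):

    ```python
    # Toy self-organizing map: best-matching unit and neighbours move toward x.
    import numpy as np

    rng = np.random.default_rng(3)
    grid_w, grid_h, dim = 8, 8, 2      # an 8x8 map over 2-D "phonetic" inputs
    weights = rng.random((grid_w, grid_h, dim))
    coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h),
                                  indexing="ij"), axis=-1)

    def train_step(x, lr=0.2, sigma=1.5):
        """Move the best-matching unit and its neighbours toward input x."""
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
        influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
        weights[...] += lr * influence * (x - weights)   # in-place update

    for x in rng.random((500, dim)):   # stand-ins for acoustic-phonetic inputs
        train_step(x)
    ```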

  4. The sounds of sarcasm in English and Cantonese: A cross-linguistic production and perception study

    NASA Astrophysics Data System (ADS)

    Cheang, Henry S.

    Three studies were conducted to examine the acoustic markers of sarcasm in English and in Cantonese, and the manner in which such markers are perceived across these languages. The first study consisted of acoustic analyses of sarcastic utterances spoken in English to verify whether particular prosodic cues correspond to English sarcastic speech. Native English speakers produced utterances expressing sarcasm, sincerity, humour, or neutrality. Measures taken from each utterance included fundamental frequency (F0), amplitude, speech rate, harmonics-to-noise ratio (HNR, to probe voice quality), and one-third octave spectral values (to probe resonance). The second study was conducted to ascertain whether specific acoustic features marked sarcasm in Cantonese and how such features compare with English sarcastic prosody. The elicitation and acoustic analysis methods from the first study were applied to similarly-constructed Cantonese utterances spoken by native Cantonese speakers. Direct acoustic comparisons between Cantonese and English sarcasm exemplars were also made. To further test for cross-linguistic prosodic cues of sarcasm and to assess whether sarcasm could be conveyed across languages, a cross-linguistic perceptual study was then performed. A subset of utterances from the first two studies was presented to naive listeners fluent in either Cantonese or English. Listeners had to identify the attitude in each utterance regardless of language of presentation. Sarcastic utterances in English (regardless of text) were marked by lower mean F0 and reductions in HNR and F0 standard deviation (relative to comparison attitudes). Resonance changes and reductions in both speech rate and F0 range signalled sarcasm in conjunction with some vocabulary terms. By contrast, higher mean F0, amplitude range reductions, and F0 range restrictions corresponded with sarcastic utterances spoken in Cantonese regardless of text. For Cantonese, reduced speech rate and higher HNR interacted with certain vocabulary to mark sarcasm. Sarcastic prosody was most distinguished from acoustic features corresponding to sincere utterances in both languages. Direct English-Cantonese comparisons between sarcasm tokens confirmed cross-linguistic differences in sarcastic prosody. Finally, Cantonese and English listeners could identify sarcasm in their native languages but identified sarcastic utterances spoken in the unfamiliar language at chance levels. It was concluded that particular acoustic cues marked sarcastic speech in Cantonese and English, and these patterns of sarcastic prosody were specific to each language.
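
    Measures such as mean F0, F0 variability, and HNR can be extracted programmatically. The sketch below assumes the parselmouth Python interface to Praat and a hypothetical file name; it illustrates the kinds of measures listed above, not the study's analysis scripts.

    ```python
    import parselmouth  # Python interface to Praat (assumed installed)

    snd = parselmouth.Sound("utterance.wav")      # hypothetical recording

    pitch = snd.to_pitch()
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]                               # drop unvoiced frames
    print(f"mean F0 = {f0.mean():.1f} Hz, F0 SD = {f0.std():.1f} Hz")

    harmonicity = snd.to_harmonicity()
    hnr = harmonicity.values[harmonicity.values != -200].mean()  # -200 marks silence
    print(f"HNR = {hnr:.1f} dB")

    print(f"duration = {snd.duration:.2f} s")     # basis for speech-rate measures
    ```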

  5. Effects of Real-Time Cochlear Implant Simulation on Speech Perception and Production

    ERIC Educational Resources Information Center

    Casserly, Elizabeth D.

    2013-01-01

    Real-time use of spoken language is a fundamentally interactive process involving speech perception, speech production, linguistic competence, motor control, neurocognitive abilities such as working memory, attention, and executive function, environmental noise, conversational context, and--critically--the communicative interaction between…

  6. The organization and reorganization of audiovisual speech perception in the first year of life.

    PubMed

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six and nine months, but not at 11 months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  7. The organization and reorganization of audiovisual speech perception in the first year of life

    PubMed Central

    Danielson, D. Kyle; Bruderer, Alison G.; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F.

    2017-01-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six and nine months, but not at 11 months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone. PMID:28970650

  8. Robust speech perception: Recognize the familiar, generalize to the similar, and adapt to the novel

    PubMed Central

    Kleinschmidt, Dave F.; Jaeger, T. Florian

    2016-01-01

    Successful speech perception requires that listeners map the acoustic signal to linguistic categories. These mappings are not only probabilistic, but change depending on the situation. For example, one talker’s /p/ might be physically indistinguishable from another talker’s /b/ (cf. lack of invariance). We characterize the computational problem posed by such a subjectively non-stationary world and propose that the speech perception system overcomes this challenge by (1) recognizing previously encountered situations, (2) generalizing to other situations based on previous similar experience, and (3) adapting to novel situations. We formalize this proposal in the ideal adapter framework: (1) to (3) can be understood as inference under uncertainty about the appropriate generative model for the current talker, thereby facilitating robust speech perception despite the lack of invariance. We focus on two critical aspects of the ideal adapter. First, in situations that clearly deviate from previous experience, listeners need to adapt. We develop a distributional (belief-updating) learning model of incremental adaptation. The model provides a good fit against known and novel phonetic adaptation data, including perceptual recalibration and selective adaptation. Second, robust speech recognition requires that listeners learn to represent the structured component of cross-situation variability in the speech signal. We discuss how these two aspects of the ideal adapter provide a unifying explanation for adaptation, talker-specificity, and generalization across talkers and groups of talkers (e.g., accents and dialects). The ideal adapter provides a guiding framework for future investigations into speech perception and adaptation, and more broadly language comprehension. PMID:25844873
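
    The core of the ideal adapter is incremental belief updating about a talker's category statistics. A minimal sketch, assuming a Gaussian category with known variance so the conjugate normal-normal update applies (a simplification of the paper's distributional learning model):

    ```python
    # Incremental belief updating about a talker's mean /p/ VOT.
    import numpy as np

    mu, tau2 = 60.0, 100.0    # prior on this talker's mean VOT (ms): N(60, 100)
    sigma2 = 25.0             # assumed known within-category variance

    rng = np.random.default_rng(4)
    tokens = rng.normal(45.0, 5.0, 30)   # talker's VOTs run shorter than expected

    for x in tokens:          # conjugate normal-normal update after each token
        precision = 1 / tau2 + 1 / sigma2
        mu = (mu / tau2 + x / sigma2) / precision
        tau2 = 1 / precision

    print(f"posterior mean = {mu:.1f} ms, posterior variance = {tau2:.2f}")
    ```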

  9. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception.

    PubMed

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-10-13

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs' response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs' early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.

  10. Speech-Language Pathologists' Comfort Levels in English Language Learner Service Delivery

    ERIC Educational Resources Information Center

    Kimble, Carlotta

    2013-01-01

    This study examined speech-language pathologists' (SLPs) comfort levels in providing service delivery to English language learners (ELLs) and limited English proficient (LEP) students. Participants included 192 SLPs from the United States and Guam. Participants completed a brief, six-item questionnaire that investigated their perceptions regarding…

  11. Speech-Language Pathologists' Attitudes and Involvement Regarding Language and Reading.

    ERIC Educational Resources Information Center

    Casby, Michael W.

    1988-01-01

    The study analyzed responses of 105 public school speech-language pathologists to a survey of perceptions of their knowledge, competencies, educational needs, and involvement with children regarding the relationship between oral language and reading disorders. Most reported they were not very involved with children with reading disorders though…

  12. Pitch perception and production in congenital amusia: Evidence from Cantonese speakers.

    PubMed

    Liu, Fang; Chan, Alice H D; Ciocca, Valter; Roquet, Catherine; Peretz, Isabelle; Wong, Patrick C M

    2016-07-01

    This study investigated pitch perception and production in speech and music in individuals with congenital amusia (a disorder of musical pitch processing) who are native speakers of Cantonese, a tone language with a highly complex tonal system. Sixteen Cantonese-speaking congenital amusics and 16 controls performed a set of lexical tone perception, production, singing, and psychophysical pitch threshold tasks. Their tone production accuracy and singing proficiency were subsequently judged by independent listeners, and subjected to acoustic analyses. Relative to controls, amusics showed impaired discrimination of lexical tones in both speech and non-speech conditions. They also received lower ratings for singing proficiency, producing larger pitch interval deviations and making more pitch interval errors compared to controls. Demonstrating higher pitch direction identification thresholds than controls for both speech syllables and piano tones, amusics nevertheless produced native lexical tones with pitch trajectories and intelligibility comparable to those of controls. Significant correlations were found between pitch threshold and lexical tone perception, music perception and production, but not between lexical tone perception and production for amusics. These findings provide further evidence that congenital amusia is a domain-general language-independent pitch-processing deficit that is associated with severely impaired music perception and production, mildly impaired speech perception, and largely intact speech production.

  13. Pitch perception and production in congenital amusia: Evidence from Cantonese speakers

    PubMed Central

    Liu, Fang; Chan, Alice H. D.; Ciocca, Valter; Roquet, Catherine; Peretz, Isabelle; Wong, Patrick C. M.

    2016-01-01

    This study investigated pitch perception and production in speech and music in individuals with congenital amusia (a disorder of musical pitch processing) who are native speakers of Cantonese, a tone language with a highly complex tonal system. Sixteen Cantonese-speaking congenital amusics and 16 controls performed a set of lexical tone perception, production, singing, and psychophysical pitch threshold tasks. Their tone production accuracy and singing proficiency were subsequently judged by independent listeners, and subjected to acoustic analyses. Relative to controls, amusics showed impaired discrimination of lexical tones in both speech and non-speech conditions. They also received lower ratings for singing proficiency, producing larger pitch interval deviations and making more pitch interval errors compared to controls. Demonstrating higher pitch direction identification thresholds than controls for both speech syllables and piano tones, amusics nevertheless produced native lexical tones with pitch trajectories and intelligibility comparable to those of controls. Significant correlations were found between pitch threshold and lexical tone perception, music perception and production, but not between lexical tone perception and production for amusics. These findings provide further evidence that congenital amusia is a domain-general language-independent pitch-processing deficit that is associated with severely impaired music perception and production, mildly impaired speech perception, and largely intact speech production. PMID:27475178

  14. Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing

    PubMed Central

    Rauschecker, Josef P; Scott, Sophie K

    2010-01-01

    Speech and language are considered uniquely human abilities: animals have communication systems, but they do not match human linguistic skills in terms of recursive structure and combinatorial power. Yet, in evolution, spoken language must have emerged from neural mechanisms at least partially available in animals. In this paper, we will demonstrate how our understanding of speech perception, one important facet of language, has profited from findings and theory in nonhuman primate studies. Chief among these are physiological and anatomical studies showing that primate auditory cortex, across species, shows patterns of hierarchical structure, topographic mapping and streams of functional processing. We will identify roles for different cortical areas in the perceptual processing of speech and review functional imaging work in humans that bears on our understanding of how the brain decodes and monitors speech. A new model connects structures in the temporal, frontal and parietal lobes linking speech perception and production. PMID:19471271

  15. Speech perception and spoken word recognition: past and present.

    PubMed

    Jusczyk, Peter W; Luce, Paul A

    2002-02-01

    The scientific study of the perception of spoken language has been an exciting, prolific, and productive area of research for more than 50 yr. We have learned much about infants' and adults' remarkable capacities for perceiving and understanding the sounds of their language, as evidenced by our increasingly sophisticated theories of acquisition, process, and representation. We present a selective but, we hope, representative review of the past half century of research on speech perception, paying particular attention to the historical and theoretical contexts within which this research was conducted. Our foci in this review fall on three principal topics: early work on the discrimination and categorization of speech sounds, more recent efforts to understand the processes and representations that subserve spoken word recognition, and research on how infants acquire the capacity to perceive their native language. Our intent is to provide the reader a sense of the progress our field has experienced over the last half century in understanding the human's extraordinary capacity for the perception of spoken language.

  16. Speech Perception and Short Term Memory Deficits in Persistent Developmental Speech Disorder

    PubMed Central

    Kenney, Mary Kay; Barac-Cikoja, Dragana; Finnegan, Kimberly; Jeffries, Neal; Ludlow, Christy L.

    2008-01-01

    Children with developmental speech disorders may have additional deficits in speech perception and/or short-term memory. To determine whether these are only transient developmental delays that can accompany the disorder in childhood or persist as part of the speech disorder, adults with a persistent familial speech disorder were tested on speech perception and short-term memory. Nine adults with a persistent familial developmental speech disorder without language impairment were compared with 20 controls on tasks requiring the discrimination of fine acoustic cues for word identification and on measures of verbal and nonverbal short-term memory. Significant group differences were found in the slopes of the discrimination curves for first formant transitions for word identification with stop gaps of 40 and 20 ms (effect sizes of 1.60 and 1.56). Significant group differences also occurred on tests of nonverbal rhythm and tonal memory, and verbal short-term memory (effect sizes of 2.38, 1.56, and 1.73). No group differences occurred in the use of stop gap durations for word identification. Because frequency-based speech perception and short-term verbal and nonverbal memory deficits both persisted into adulthood in the speech-impaired adults, these deficits may be involved in the persistence of speech disorders without language impairment. PMID:15896836
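
    Discrimination-curve slopes of the kind compared here are usually obtained by fitting a psychometric function to identification proportions. A sketch with a logistic fit on hypothetical continuum data (the study's exact fitting procedure is not specified in the abstract):

    ```python
    # Fit a logistic psychometric function and report boundary and slope.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, midpoint, slope):
        """Proportion of one response along the continuum."""
        return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

    steps = np.arange(1, 8)                                          # continuum steps
    p_word_a = np.array([0.05, 0.10, 0.20, 0.50, 0.80, 0.92, 0.97])  # hypothetical

    (midpoint, slope), _ = curve_fit(logistic, steps, p_word_a, p0=[4.0, 1.0])
    print(f"category boundary at step {midpoint:.2f}, slope = {slope:.2f}")
    ```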

  17. Basic to Applied Research: The Benefits of Audio-Visual Speech Perception Research in Teaching Foreign Languages

    ERIC Educational Resources Information Center

    Erdener, Dogu

    2016-01-01

    Traditionally, second language (L2) instruction has emphasised auditory-based instruction methods. However, this approach is restrictive in the sense that speech perception by humans is not just an auditory phenomenon but a multimodal one, and specifically, a visual one as well. In the past decade, experimental studies have shown that the…

  18. Children with Speech, Language and Communication Needs: Their Perceptions of Their Quality of Life

    ERIC Educational Resources Information Center

    Markham, Chris; van Laar, Darren; Gibbard, Deborah; Dean, Taraneh

    2009-01-01

    Background: This study is part of a programme of research aiming to develop a quantitative measure of quality of life for children with communication needs. It builds on the preliminary findings of Markham and Dean (2006), which described some of the perceptions parents and carers of children with speech, language and communication needs had…

  19. Cross-Language Activation in Children's Speech Production: Evidence from Second Language Learners, Bilinguals, and Trilinguals

    ERIC Educational Resources Information Center

    Poarch, Gregory J.; van Hell, Janet G.

    2012-01-01

    In five experiments, we examined cross-language activation during speech production in various groups of bilinguals and trilinguals who differed in nonnative language proficiency, language learning background, and age. In Experiments 1, 2, 3, and 5, German 5- to 8-year-old second language learners of English, German-English bilinguals,…

  20. A Model for Speech Processing in Second Language Listening Activities

    ERIC Educational Resources Information Center

    Zoghbor, Wafa Shahada

    2016-01-01

    Teachers' understanding of the process of speech perception could inform practice in listening classrooms. Catford (1950) developed a model for speech perception taking into account the influence of the acoustic features of the linguistic forms used by the speaker, whereby the listener "identifies" and "interprets" these…

  1. Perception of the Voicing Distinction in Speech Produced during Simultaneous Communication

    ERIC Educational Resources Information Center

    MacKenzie, Douglas J.; Schiavetti, Nicholas; Whitehead, Robert L.; Metz, Dale Evan

    2006-01-01

    This study investigated the perception of voice onset time (VOT) in speech produced during simultaneous communication (SC). Four normally hearing, experienced sign language users were recorded under SC and speech alone (SA) conditions speaking stimulus words with voiced and voiceless initial consonants embedded in a sentence. Twelve…

  2. Families' perception of children/adolescents with language impairment through the International Classification of Functioning, Disability, and Health (ICF-CY).

    PubMed

    Ostroschi, Daniele Theodoro; Zanolli, Maria de Lurdes; Chun, Regina Yu Shon

    2017-05-22

    To investigate the perception of family members regarding the linguistic conditions and social participation of children and adolescents with speech and language impairments, using the International Classification of Functioning, Disability and Health - Children and Youth Version (ICF-CY). A quali-quantitative study was conducted, comprising a survey of the medical records of 24 children/adolescents undergoing speech-language therapy and interviews with their family members. A descriptive analysis of the participants' profiles was performed, followed by a categorization of responses using the ICF-CY. All family members mentioned various aspects of speech/language categorized by the ICF-CY. Initially, they approached it as an organic issue, categorized under the component of Body Functions and Structures. Most reported different repercussions of the speech-language impairments on the domains, such as dealing with stress and speaking, qualified from mild to severe. Participants reported Environmental Factors categorized as facilitators in the immediate family's attitudes and as barriers in social attitudes. These findings, obtained through the use of the ICF-CY, demonstrate that the children/adolescents' speech-language impairments are, from the families' perception, primarily understood in the body dimension. However, guided by a broader approach to health, the findings in Activities and Participation and Environmental Factors demonstrate a broader understanding by the participants of the speech-language impairments. The results corroborate the importance of using the ICF-CY as a health care analysis tool, by incorporating functionality and participation aspects and providing subsidies for the construction of unique therapeutic projects in a broader approach to the health of the group studied.

  3. When Hearing Is Tricky: Speech Processing Strategies in Prelingually Deafened Children and Adolescents with Cochlear Implants Having Good and Poor Speech Performance

    PubMed Central

    Ortmann, Magdalene; Zwitserlood, Pienie; Knief, Arne; Baare, Johanna; Brinkheetker, Stephanie; am Zehnhoff-Dinnesen, Antoinette; Dobel, Christian

    2017-01-01

    Cochlear implants provide individuals who are deaf with access to speech. Although substantial advancements have been made by novel technologies, there still is high variability in language development during childhood, depending on adaptation and neural plasticity. These factors have often been investigated in the auditory domain, with the mismatch negativity (MMN) as an index of sensory and phonological processing. Several studies have demonstrated that the MMN is an electrophysiological correlate of hearing improvement with cochlear implants. In this study, two groups of cochlear implant users, both with very good basic hearing abilities but with non-overlapping speech performance (very good or very poor speech performance), were matched according to device experience and age at implantation. We tested the perception of phonemes in the context of specific other phonemes from which they were very hard to discriminate (e.g., the vowels in /bu/ vs. /bo/). The most difficult pair was individually determined for each participant. Using behavioral measures, both cochlear implant groups performed worse than matched controls, and the good performers performed better than the poor performers. Cochlear implant groups and controls did not differ during the time intervals typically used for the MMN; differences emerged earlier: source analyses revealed increased activity in the region of the right supramarginal gyrus (220–260 ms) in good performers. Poor performers showed increased activity in the left occipital cortex (220–290 ms), which may be an index of cross-modal perception. The time course and the neural generators differ from data from our earlier studies, in which the same phonemes were assessed in an easy-to-discriminate context. The results demonstrate that the groups used different language processing strategies, depending on the success of language development and the particular language context. Overall, our data emphasize the role of neural plasticity and the use of adaptive strategies for successful language development with cochlear implants. PMID:28056017

  4. The Knowledge and Perceptions of Prospective Teachers and Speech Language Therapists in Collaborative Language and Literacy Instruction

    ERIC Educational Resources Information Center

    Wilson, Leanne; McNeill, Brigid; Gillon, Gail T.

    2015-01-01

    Successful collaboration among speech and language therapists (SLTs) and teachers fosters the creation of communication friendly classrooms that maximize children's spoken and written language learning. However, these groups of professionals may have insufficient opportunity in their professional study to develop the shared knowledge, perceptions…

  5. Alternative Organization of Speech Perception Deficits in Children

    ERIC Educational Resources Information Center

    Gosy, Maria

    2007-01-01

    Children's first-language perception base takes shape gradually from birth onwards. Empirical research has confirmed that children may continue to fall short of age-based expectations in their speech perception. The purpose of this study was to assess the contribution of various perception processes in both reading and learning disabled children.…

  6. How the demographic makeup of our community influences speech perception.

    PubMed

    Lev-Ari, Shiri; Peperkamp, Sharon

    2016-06-01

    Speech perception is known to be influenced by listeners' expectations of the speaker. This paper tests whether the demographic makeup of individuals' communities can influence their perception of foreign sounds by influencing their expectations of the language. Using online experiments with participants from all across the U.S. and matched census data on the proportion of Spanish and other foreign language speakers in participants' communities, this paper shows that the demographic makeup of individuals' communities influences their expectation that a foreign language will have an alveolar trill versus a tap (Experiment 1), as well as their consequent perception of these sounds (Experiment 2). Thus, the paper shows that while individuals' expectation that a foreign language has a trill occasionally leads them to misperceive a tap in a foreign language as a trill, a higher proportion of non-trill-language speakers in one's community decreases this likelihood. These results show that individuals' environment can influence their perception by shaping their linguistic expectations.
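
    The community-level effect described above maps naturally onto a logistic regression of trill perception on the local proportion of non-trill-language speakers. The sketch below uses synthetic listener data with a built-in negative slope to mirror the reported direction of the effect; it is not the paper's analysis.

    ```python
    # Logistic regression of a binary percept on a community-level predictor.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    prop_non_trill = rng.uniform(0, 0.5, 300)           # hypothetical census measure
    logit = 0.5 - 4.0 * prop_non_trill                  # built-in negative effect
    perceived_trill = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(prop_non_trill)
    fit = sm.Logit(perceived_trill, X).fit(disp=False)
    print(fit.params)   # the negative slope mirrors the reported direction
    ```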

  7. Speech-Language Pathologists' Perceptions of the Importance and Ability to Use Assistive Technology in the Kingdom of Saudi Arabia

    ERIC Educational Resources Information Center

    Al-Dawaideh, Ahmad Mousa

    2013-01-01

    Speech-language pathologists (SLPs) frequently work with people with severe communication disorders who require assistive technology (AT) for communication. The purpose of this study was to investigate SLPs' perceptions of the importance of and the ability level required for using AT, and the relationship of AT with gender, level of education,…

  8. The Acquisitional Value of Recasts in Instructed Second Language Speech Learning: Teaching the Perception and Production of English /ɹ/ to Adult Japanese Learners

    ERIC Educational Resources Information Center

    Saito, Kazuya

    2013-01-01

    The current study investigated the impact of recasts together with form-focused instruction (FFI) on the development of second language speech perception and production of English /ɹ/ by Japanese learners. Forty-five learners were randomly assigned to three groups--FFI recasts, FFI only, and Control--and exposed to four hours of communicatively…

  9. Teaching Elements of English RP Connected Speech and CALL: Phonemic Assimilation

    ERIC Educational Resources Information Center

    Veselovska, Ganna

    2016-01-01

    Phonology represents an important part of the English language; however, in the course of English language acquisition, it is rarely treated with proper attention. Connected speech is one of the aspects essential for successful communication, which comprises effective auditory perception and speech production. In this paper I explored phonemic…

  10. Language-Specific Developmental Differences in Speech Production: A Cross-Language Acoustic Study

    ERIC Educational Resources Information Center

    Li, Fangfang

    2012-01-01

    Speech productions of 40 English- and 40 Japanese-speaking children (aged 2-5) were examined and compared with the speech produced by 20 adult speakers (10 speakers per language). Participants were recorded while repeating words that began with "s" and "sh" sounds. Clear language-specific patterns in adults' speech were found,…

  11. Individual Differences in Premotor and Motor Recruitment during Speech Perception

    ERIC Educational Resources Information Center

    Szenkovits, Gayaneh; Peelle, Jonathan E.; Norris, Dennis; Davis, Matthew H.

    2012-01-01

    Although activity in premotor and motor cortices is commonly observed in neuroimaging studies of spoken language processing, the degree to which this activity is an obligatory part of everyday speech comprehension remains unclear. We hypothesised that rather than being a unitary phenomenon, the neural response to speech perception in motor regions…

  12. Development and preliminary evaluation of a pediatric Spanish-English speech perception task.

    PubMed

    Calandruccio, Lauren; Gomez, Bianca; Buss, Emily; Leibold, Lori J

    2014-06-01

    The purpose of this study was to develop a task to evaluate children's English and Spanish speech perception abilities in either noise or competing speech maskers. Eight bilingual Spanish-English and 8 age-matched monolingual English children (ages 4.9-16.4 years) were tested. A forced-choice, picture-pointing paradigm was selected for adaptively estimating masked speech reception thresholds. Speech stimuli were spoken by simultaneous bilingual Spanish-English talkers. The target stimuli were 30 disyllabic English and Spanish words, familiar to 5-year-olds and easily illustrated. Competing stimuli included either 2-talker English or 2-talker Spanish speech (corresponding to target language) and spectrally matched noise. For both groups of children, regardless of test language, performance was significantly worse for the 2-talker than for the noise masker condition. No difference in performance was found between bilingual and monolingual children. Bilingual children performed significantly better in English than in Spanish in competing speech. For all listening conditions, performance improved with increasing age. Results indicated that the stimuli and task were appropriate for speech recognition testing in both languages, providing a more conventional measure of speech-in-noise perception as well as a measure of complex listening. Further research is needed to determine performance for Spanish-dominant listeners and to evaluate the feasibility of implementation into routine clinical use.
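
    Adaptive estimation of a speech reception threshold is commonly done with a two-down/one-up staircase, which converges on roughly 70.7% correct. The sketch below simulates such a track with a hypothetical listener; the study's specific tracking rule is not stated in the abstract, so this illustrates the general method.

    ```python
    # Two-down/one-up adaptive staircase with a simulated forced-choice listener.
    import numpy as np

    rng = np.random.default_rng(6)

    def simulated_listener(snr_db, srt_true=-4.0, slope=0.5):
        """Hypothetical listener: probability correct rises with SNR (logistic)."""
        p_correct = 1.0 / (1.0 + np.exp(-slope * (snr_db - srt_true)))
        return rng.random() < p_correct

    snr, step = 10.0, 4.0                      # starting SNR (dB) and step size
    correct_in_row, direction, reversals = 0, 0, []
    while len(reversals) < 8:
        if simulated_listener(snr):
            correct_in_row += 1
            if correct_in_row == 2:            # two correct in a row -> harder
                correct_in_row = 0
                if direction == +1:            # direction change = reversal
                    reversals.append(snr)
                    step = max(step / 2, 1.0)  # shrink step after each reversal
                direction = -1
                snr -= step
        else:                                  # one error -> easier
            correct_in_row = 0
            if direction == -1:
                reversals.append(snr)
                step = max(step / 2, 1.0)
            direction = +1
            snr += step

    print(f"estimated SRT = {np.mean(reversals[-4:]):.1f} dB SNR")
    ```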

  13. Development and preliminary evaluation of a pediatric Spanish/English speech perception task

    PubMed Central

    Calandruccio, Lauren; Gomez, Bianca; Buss, Emily; Leibold, Lori J.

    2014-01-01

    Purpose To develop a task to evaluate children’s English and Spanish speech perception abilities in either noise or competing speech maskers. Methods Eight bilingual Spanish/English and eight age-matched monolingual English children (ages 4.9–16.4 years) were tested. A forced-choice, picture-pointing paradigm was selected for adaptively estimating masked speech reception thresholds. Speech stimuli were spoken by simultaneous bilingual Spanish/English talkers. The target stimuli were thirty disyllabic English and Spanish words, familiar to five-year-olds and easily illustrated. Competing stimuli included either two-talker English or two-talker Spanish speech (corresponding to target language) and spectrally matched noise. Results For both groups of children, regardless of test language, performance was significantly worse for the two-talker than for the noise masker. No difference in performance was found between bilingual and monolingual children. Bilingual children performed significantly better in English than in Spanish in competing speech. For all listening conditions, performance improved with increasing age. Conclusions Results indicate that the stimuli and task are appropriate for speech recognition testing in both languages, providing a more conventional measure of speech-in-noise perception as well as a measure of complex listening. Further research is needed to determine performance for Spanish-dominant listeners and to evaluate the feasibility of implementation into routine clinical use. PMID:24686915

  14. The role of left inferior frontal cortex during audiovisual speech perception in infants.

    PubMed

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2016-06-01

    In the first year of life, infants' speech perception attunes to their native language. While the behavioral changes associated with native language attunement are fairly well mapped, the underlying mechanisms and neural processes are still only poorly understood. Using fNIRS and eye tracking, the current study investigated 6-month-old infants' processing of audiovisual speech that contained matching or mismatching auditory and visual speech cues. Our results revealed that infants' speech-sensitive brain responses in inferior frontal brain regions were lateralized to the left hemisphere. Critically, our results further revealed that speech-sensitive left inferior frontal regions showed enhanced responses to matching when compared to mismatching audiovisual speech, and that infants with a preference to look at the speaker's mouth showed an enhanced left inferior frontal response to speech compared to infants with a preference to look at the speaker's eyes. These results suggest that left inferior frontal regions play a crucial role in associating information from different modalities during native language attunement, fostering the formation of multimodal phonological categories. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Audio-visual speech perception in infants and toddlers with Down syndrome, fragile X syndrome, and Williams syndrome.

    PubMed

    D'Souza, Dean; D'Souza, Hana; Johnson, Mark H; Karmiloff-Smith, Annette

    2016-08-01

    Typically-developing (TD) infants can construct unified cross-modal percepts, such as a speaking face, by integrating auditory-visual (AV) information. This skill is a key building block upon which higher-level skills, such as word learning, are built. Because word learning is seriously delayed in most children with neurodevelopmental disorders, we assessed the hypothesis that this delay partly results from a deficit in integrating AV speech cues. AV speech integration has rarely been investigated in neurodevelopmental disorders, and never previously in infants. We probed for the McGurk effect, which occurs when the auditory component of one sound (/ba/) is paired with the visual component of another sound (/ga/), leading to the perception of an illusory third sound (/da/ or /tha/). We measured AV integration in 95 infants/toddlers with Down, fragile X, or Williams syndrome, whom we matched on Chronological and Mental Age to 25 TD infants. We also assessed a more basic AV perceptual ability: sensitivity to matching vs. mismatching AV speech stimuli. Infants with Williams syndrome failed to demonstrate a McGurk effect, indicating poor AV speech integration. Moreover, while the TD children discriminated between matching and mismatching AV stimuli, none of the other groups did, hinting at a basic deficit or delay in AV speech processing, which is likely to constrain subsequent language development. Copyright © 2016 Elsevier Inc. All rights reserved.
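
    Probing for the McGurk effect reduces, at the scoring stage, to computing the proportion of incongruent trials answered with the illusory fusion percept. A minimal sketch with hypothetical responses:

    ```python
    # Responses to incongruent trials (audio /ba/ dubbed onto visual /ga/); hypothetical.
    responses = ["da", "ba", "tha", "da", "ba", "da", "ga", "da", "tha", "da"]
    fusion_rate = sum(r in ("da", "tha") for r in responses) / len(responses)
    print(f"McGurk fusion rate = {fusion_rate:.0%}")   # -> 70%
    ```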

  16. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    PubMed

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Clinical Use of AEVP- and AERP-Measures in Childhood Speech Disorders

    ERIC Educational Resources Information Center

    Maassen, Ben; Pasman, Jaco; Nijland, Lian; Rotteveel, Jan

    2006-01-01

    It has long been recognized that from the first months of life auditory perception plays a crucial role in speech and language development. Only in recent years, however, is the precise mechanism of auditory development and its interaction with the acquisition of speech and language beginning to be systematically revealed. This paper presents the…

  18. Verbal Self-Monitoring in the Second Language

    ERIC Educational Resources Information Center

    Broos, Wouter P. J.; Duyck, Wouter; Hartsuiker, Robert J.

    2016-01-01

    Speakers monitor their own speech for errors. To do so, they may rely on perception of their own speech (external monitoring) but also on an internal speech representation (internal monitoring). While there are detailed accounts of monitoring in first language (L1) processing, it is not clear if and how monitoring is different in a second language…

  19. The Acquisition of Consonant Clusters by Japanese Learners of English: Interactions of Speech Perception and Production

    ERIC Educational Resources Information Center

    Sperbeck, Mieko

    2010-01-01

    The primary aim of this dissertation was to investigate the relationship between speech perception and speech production difficulties among Japanese second language (L2) learners of English, in their learning complex syllable structures. Japanese L2 learners and American English controls were tested in a categorical ABX discrimination task of…

  20. Unattended Exposure to Components of Speech Sounds Yields Same Benefits as Explicit Auditory Training

    ERIC Educational Resources Information Center

    Seitz, Aaron R.; Protopapas, Athanassios; Tsushima, Yoshiaki; Vlahou, Eleni L.; Gori, Simone; Grossberg, Stephen; Watanabe, Takeo

    2010-01-01

    Learning a second language as an adult is particularly effortful when new phonetic representations must be formed. Therefore the processes that allow learning of speech sounds are of great theoretical and practical interest. Here we examined whether perception of single formant transitions, that is, sound components critical in speech perception,…

  1. Perceptual Judgments of Accented Speech by Listeners from Different First Language Backgrounds

    ERIC Educational Resources Information Center

    Kang, Okim; Vo, Son Ca Thanh; Moran, Meghan Kerry

    2016-01-01

    Research in second language speech has often focused on listeners' accent judgment and factors that affect their perception. However, the topic of listeners' application of specific sound categories in their own perceptual judgments has not been widely investigated. The current study explored how listeners from diverse language backgrounds weighed…

  2. Speech Perception Abilities of Adults with Dyslexia: Is There Any Evidence for a True Deficit?

    ERIC Educational Resources Information Center

    Hazan, Valerie; Messaoud-Galusi, Souhila; Rosen, Stuart; Nouwens, Suzan; Shakespeare, Bethanie

    2009-01-01

    Purpose: This study investigated whether adults with dyslexia show evidence of a consistent speech perception deficit by testing phoneme categorization and word perception in noise. Method: Seventeen adults with dyslexia and 20 average readers underwent a test battery including standardized reading, language and phonological awareness tests, and…

  3. "It's the Way You Talk to Them." The Child's Environment: Early Years Practitioners' Perceptions of Its Influence on Speech and Language Development, Its Assessment and Environment Targeted Interventions

    ERIC Educational Resources Information Center

    Marshall, Julie; Lewis, Elizabeth

    2014-01-01

    Speech and language delay occurs in approximately 6% of the child population, and interventions to support this group of children focus on the child and/or the communicative environment. Evidence about the effectiveness of interventions that focus on the environment as well as the (reported) practices of speech and language therapists (SLTs) and…

  4. Differential neural contributions to native- and foreign-language talker identification

    PubMed Central

    Perrachione, Tyler K.; Pierrehumbert, Janet B.; Wong, Patrick C.M.

    2009-01-01

    Humans are remarkably adept at identifying individuals by the sound of their voice, a behavior supported by the nervous system’s ability to integrate information from voice and speech perception. Talker-identification abilities are significantly impaired when listeners are unfamiliar with the language being spoken. Recent behavioral studies describing the language-familiarity effect implicate functionally integrated neural systems for speech and voice perception, yet specific neuroscientific evidence demonstrating the basis for such integration has not yet been shown. Listeners in the present study learned to identify voices speaking a familiar (native) or unfamiliar (foreign) language. The talker-identification performance of neural circuitry in each cerebral hemisphere was assessed using dichotic listening. To determine the relative contribution of circuitry in each hemisphere to ecological (binaural) talker identification abilities, we compared the predictive capacity of dichotic performance for binaural performance across languages. We found listeners’ right-ear (left hemisphere) performance to be a better predictor of overall accuracy in their native language than in a foreign one. The enhanced predictive capacity of the classically language-dominant left hemisphere for overall talker-identification accuracy demonstrates functionally integrated neural systems for speech and voice perception during natural talker identification. PMID:19968445

  5. Auditory Processing in Specific Language Impairment (SLI): Relations with the Perception of Lexical and Phrasal Stress

    ERIC Educational Resources Information Center

    Richards, Susan; Goswami, Usha

    2015-01-01

    Purpose: We investigated whether impaired acoustic processing is a factor in developmental language disorders. The amplitude envelope of the speech signal is known to be important in language processing. We examined whether impaired perception of amplitude envelope rise time is related to impaired perception of lexical and phrasal stress in…

  6. Pitch Perception in Tone Language-Speaking Adults With and Without Autism Spectrum Disorders

    PubMed Central

    Cheng, Stella T. T.; Lam, Gary Y. H.

    2017-01-01

    Enhanced low-level pitch perception has been universally reported in autism spectrum disorders (ASD). This study examined whether tone language speakers with ASD exhibit this advantage. The pitch perception skill of 20 Cantonese-speaking adults with ASD was compared with that of 20 neurotypical individuals. Participants discriminated pairs of real-syllable, pseudo-syllable (syllables that do not conform to the phonotactic rules or are accidental gaps), and non-speech (syllables with attenuated high-frequency segmental content) stimuli contrasting pitch levels. The results revealed significantly higher discrimination ability in both groups for the non-speech stimuli than for the pseudo-syllables with a one-semitone difference. No significant group differences were noted. In contrast to previous findings, post hoc analysis revealed enhanced pitch perception in a subgroup of participants with ASD with no history of delayed speech onset. Tone language experience may have modulated the pitch processing mechanism in speakers in both the ASD and non-ASD groups. PMID:28616150

  7. Effects of cross-language voice training on speech perception: Whose familiar voices are more intelligible?

    PubMed Central

    Levi, Susannah V.; Winters, Stephen J.; Pisoni, David B.

    2011-01-01

    Previous research has shown that familiarity with a talker’s voice can improve linguistic processing (herein, “Familiar Talker Advantage”), but this benefit is constrained by the context in which the talker’s voice is familiar. The current study examined how familiarity affects intelligibility by manipulating the type of talker information available to listeners. One group of listeners learned to identify bilingual talkers’ voices from English words, where they learned language-specific talker information. A second group of listeners learned the same talkers from German words, and thus only learned language-independent talker information. After voice training, both groups of listeners completed a word recognition task with English words produced by both familiar and unfamiliar talkers. Results revealed that English-trained listeners perceived more phonemes correct for familiar than unfamiliar talkers, while German-trained listeners did not show improved intelligibility for familiar talkers. The absence of a processing advantage in speech intelligibility for the German-trained listeners demonstrates limitations on the Familiar Talker Advantage, which crucially depends on the language context in which the talkers’ voices were learned; knowledge of how a talker produces linguistically relevant contrasts in a particular language is necessary to increase speech intelligibility for words produced by familiar talkers. PMID:22225059

  8. Visual cortex entrains to sign language.

    PubMed

    Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel

    2017-06-13

    Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language below 5 Hz, peaking at ~1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
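
    The two analysis steps this record describes — deriving a time-varying visual-change signal from video, then testing for coherence between that signal and EEG — can be sketched in a few lines. The sketch below is a hypothetical minimal version, not the authors' pipeline: the frame-difference metric, frame rate, array shapes, and random data are all invented stand-ins.

```python
# Minimal sketch (assumptions noted above). Requires numpy and scipy.
import numpy as np
from scipy.signal import coherence

def instantaneous_visual_change(frames):
    """Mean absolute pixel difference between consecutive frames:
    one simple way to quantify visual change over time."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))  # one value per frame transition

fps = 30.0                               # assumed video frame rate
frames = np.random.rand(300, 64, 64)     # stand-in for grayscale ASL video
eeg = np.random.randn(299)               # stand-in EEG, aligned to frame transitions

visual_change = instantaneous_visual_change(frames)
# Magnitude-squared coherence between EEG and the visual-change signal;
# entrainment of the kind reported above would appear as elevated
# coherence at low frequencies (below ~5 Hz, peaking near 1 Hz).
f, Cxy = coherence(eeg, visual_change, fs=fps, nperseg=128)
print(f[f < 5], Cxy[f < 5])
```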

  9. Using Visible Speech to Train Perception and Production of Speech for Individuals with Hearing Loss.

    ERIC Educational Resources Information Center

    Massaro, Dominic W.; Light, Joanna

    2004-01-01

    The main goal of this study was to implement a computer-animated talking head, Baldi, as a language tutor for speech perception and production for individuals with hearing loss. Baldi can speak slowly; illustrate articulation by making the skin transparent to reveal the tongue, teeth, and palate; and show supplementary articulatory features, such…

  10. Second Language Ability and Emotional Prosody Perception

    PubMed Central

    Bhatara, Anjali; Laukka, Petri; Boll-Avetisyan, Natalie; Granjon, Lionel; Anger Elfenbein, Hillary; Bänziger, Tanja

    2016-01-01

    The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not with the interpretation of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions. PMID:27253326

  11. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    PubMed

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Musical experience facilitates lexical tone processing among Mandarin speakers: Behavioral and neural evidence.

    PubMed

    Tang, Wei; Xiong, Wen; Zhang, Yu-Xuan; Dong, Qi; Nan, Yun

    2016-10-01

    Music and speech share many sound attributes. Pitch, as the percept of fundamental frequency, often occupies the center of researchers' attention in studies on the relationship between music and speech. One widely held assumption is that music experience may confer an advantage in speech tone processing. The cross-domain effects of musical training on non-tonal language speakers' linguistic pitch processing have been relatively well established. However, it remains unclear whether musical experience improves the processing of lexical tone for native tone language speakers who actually use lexical tones in their daily communication. Using a passive oddball paradigm, the present study revealed that among Mandarin speakers, musicians demonstrated enlarged electrical responses to lexical tone changes as reflected by the increased mismatch negativity (MMN) amplitudes, as well as faster behavioral discrimination performance compared with age- and IQ-matched nonmusicians. The current results suggest that in spite of the preexisting long-term experience with lexical tones in both musicians and nonmusicians, musical experience can still modulate the cortical plasticity of linguistic tone processing and is associated with enhanced neural processing of speech tones. Our current results thus provide the first electrophysiological evidence supporting the notion that pitch expertise in the music domain may indeed be transferable to the speech domain even for native tone language speakers. Copyright © 2016 Elsevier Ltd. All rights reserved.
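
    For readers unfamiliar with the mismatch negativity (MMN) measure this record relies on, the sketch below shows how an MMN amplitude is conventionally derived in an oddball paradigm: average the deviant and standard epochs separately, subtract, and take the mean amplitude in a typical MMN latency window. The sampling rate, epoch counts, window, and data are illustrative assumptions, not the study's parameters.

```python
# Hedged sketch of a conventional MMN computation (simulated data).
import numpy as np

fs = 500                                        # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)                # epoch time axis (s)
standard_epochs = np.random.randn(200, t.size)  # stand-in ERP epochs
deviant_epochs = np.random.randn(40, t.size)

# MMN: deviant-minus-standard difference wave.
difference_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
window = (t >= 0.10) & (t <= 0.25)              # typical MMN latency window
mmn_amplitude = difference_wave[window].mean()
print(f"MMN amplitude: {mmn_amplitude:.2f} (arbitrary units)")
```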

  13. Listening to Accented Speech in a Second Language: First Language and Age of Acquisition Effects

    ERIC Educational Resources Information Center

    Larraza, Saioa; Samuel, Arthur G.; Oñederra, Miren Lourdes

    2016-01-01

    Bilingual speakers must acquire the phonemic inventory of 2 languages and need to recognize spoken words cross-linguistically; a demanding job potentially made even more difficult due to dialectal variation, an intrinsic property of speech. The present work examines how bilinguals perceive second language (L2) accented speech and where…

  14. Toward Establishing Continuity in Linguistic Skills within Early Infancy

    ERIC Educational Resources Information Center

    Seidl, Amanda; French, Brian; Wang, Yuanyuan; Cristia, Alejandrina

    2014-01-01

    A growing research line documents significant bivariate correlations between individual measures of speech perception gathered in infancy and concurrent or later vocabulary size. One interpretation of this correlation is that it reflects language specificity: Both speech perception tasks and the development of the vocabulary recruit the…

  15. Communicating by Language: The Speech Process.

    ERIC Educational Resources Information Center

    House, Arthur S., Ed.

    This document reports on a conference focused on speech problems. The main objective of these discussions was to facilitate a deeper understanding of human communication through interaction of conference participants with colleagues in other disciplines. Topics discussed included speech production, feedback, speech perception, and development of…

  16. Crosslinguistic application of English-centric rhythm descriptors in motor speech disorders.

    PubMed

    Liss, Julie M; Utianski, Rene; Lansford, Kaitlin

    2013-01-01

    Rhythmic disturbances are a hallmark of motor speech disorders, in which the motor control deficits interfere with the outward flow of speech and, by extension, with speech understanding. As the functions of rhythm are language-specific, breakdowns in rhythm should have language-specific consequences for communication. The goals of this paper are to (i) provide a review of the cognitive-linguistic role of rhythm in speech perception in a general sense and crosslinguistically; (ii) present new results of lexical segmentation challenges posed by different types of dysarthria in American English, and (iii) offer a framework for crosslinguistic considerations for speech rhythm disturbances in the diagnosis and treatment of communication disorders associated with motor speech disorders. This review presents theoretical and empirical reasons for considering speech rhythm as a critical component of communication deficits in motor speech disorders, and addresses the need for crosslinguistic research to explore language-universal versus language-specific aspects of motor speech disorders. Copyright © 2013 S. Karger AG, Basel.

  17. Crosslinguistic Application of English-Centric Rhythm Descriptors in Motor Speech Disorders

    PubMed Central

    Liss, Julie M.; Utianski, Rene; Lansford, Kaitlin

    2014-01-01

    Background: Rhythmic disturbances are a hallmark of motor speech disorders, in which the motor control deficits interfere with the outward flow of speech and, by extension, with speech understanding. As the functions of rhythm are language-specific, breakdowns in rhythm should have language-specific consequences for communication. Objective: The goals of this paper are to (i) provide a review of the cognitive-linguistic role of rhythm in speech perception in a general sense and crosslinguistically; (ii) present new results of lexical segmentation challenges posed by different types of dysarthria in American English, and (iii) offer a framework for crosslinguistic considerations for speech rhythm disturbances in the diagnosis and treatment of communication disorders associated with motor speech disorders. Summary: This review presents theoretical and empirical reasons for considering speech rhythm as a critical component of communication deficits in motor speech disorders, and addresses the need for crosslinguistic research to explore language-universal versus language-specific aspects of motor speech disorders. PMID:24157596

  18. Perceptions of Staff on Embedding Speech and Language Therapy within a Youth Offending Team

    ERIC Educational Resources Information Center

    Bryan, Karen; Gregory, Juliette

    2013-01-01

    The purpose of this research was to ascertain the views of staff and managers within a youth offending team on their experiences of working with a speech and language therapist (SLT). The model of therapy provision was similar to the whole-systems approach used in schools. The impact of the service on language outcomes is reported elsewhere…

  19. The Emergence of L2 Phonological Contrast in Perception: The Case of Korean Sibilant Fricatives

    ERIC Educational Resources Information Center

    Holliday, Jeffrey J.

    2012-01-01

    The perception of non-native speech sounds is heavily influenced by the acoustic cues that are relevant for differentiating members of a listener's native (L1) phonological contrasts. Many studies of both (naive) non-native and (not naive) second language (L2) speech perception implicitly assume continuity in a listener's habits of…

  20. Children's Speech Perception in Noise: Evidence for Dissociation From Language and Working Memory.

    PubMed

    Magimairaj, Beula M; Nagaraj, Naveen K; Benafield, Natalie J

    2018-05-17

    We examined the association between speech perception in noise (SPIN), language abilities, and working memory (WM) capacity in school-age children. Existing studies supporting the Ease of Language Understanding (ELU) model suggest that WM capacity plays a significant role in adverse listening situations. Eighty-three children between the ages of 7 and 11 years participated. The sample represented a continuum of individual differences in attention, memory, and language abilities. All children had normal-range hearing and normal-range nonverbal IQ. Children completed the Bamford-Kowal-Bench Speech-in-Noise Test (BKB-SIN; Etymotic Research, 2005), a selective auditory attention task, and multiple measures of language and WM. Partial correlations (controlling for age) showed significant positive associations among attention, memory, and language measures. However, BKB-SIN did not correlate significantly with any of the other measures. Principal component analysis revealed a distinct WM factor and a distinct language factor. BKB-SIN loaded robustly as a distinct 3rd factor with minimal secondary loading from sentence recall and short-term memory. Nonverbal IQ loaded as a 4th factor. Results did not support an association between SPIN and WM capacity in children. However, in this study, a single SPIN measure was used. Future studies using multiple SPIN measures are warranted. Evidence from the current study supports the use of BKB-SIN as a clinical measure of speech perception ability because it was not influenced by variation in children's language and memory abilities. More large-scale studies in school-age children are needed to replicate the proposed role played by WM in adverse listening situations.
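
    The partial correlations reported above (controlling for age) can be computed by residualizing both measures on age and correlating the residuals. The sketch below illustrates that standard formulation; the sample size matches the record, but all scores and variable names are hypothetical stand-ins for the study's measures.

```python
# Hedged sketch of an age-partialed correlation (simulated data).
import numpy as np

def partial_corr(x, y, covariate):
    """Correlation between x and y after regressing out a covariate."""
    def residualize(v, c):
        design = np.column_stack([np.ones_like(c), c])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    return np.corrcoef(residualize(x, covariate),
                       residualize(y, covariate))[0, 1]

rng = np.random.default_rng(0)
age = rng.uniform(7, 11, 83)        # 83 children, as in the study
bkb_sin = rng.normal(size=83)       # stand-in SPIN scores
working_memory = rng.normal(size=83)
print(partial_corr(bkb_sin, working_memory, age))
```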

  1. Acquisition by Processing Theory: A Theory of Everything?

    ERIC Educational Resources Information Center

    Carroll, Susanne E.

    2004-01-01

    Truscott and Sharwood Smith (henceforth T&SS) propose a novel theory of language acquisition, "Acquisition by Processing Theory" (APT), designed to account for both first and second language acquisition, monolingual and bilingual speech perception and parsing, and speech production. This is a tall order. Like any theoretically ambitious…

  2. What Does the Right Hemisphere Know about Phoneme Categories?

    ERIC Educational Resources Information Center

    Wolmetz, Michael; Poeppel, David; Rapp, Brenda

    2011-01-01

    Innate auditory sensitivities and familiarity with the sounds of language give rise to clear influences of phonemic categories on adult perception of speech. With few exceptions, current models endorse highly left-hemisphere-lateralized mechanisms responsible for the influence of phonemic category on speech perception, based primarily on results…

  3. Status Report on Speech Research, No. 27, July-September 1971.

    ERIC Educational Resources Information Center

    Haskins Labs., New Haven, CT.

    This report contains fourteen papers on a wide range of current topics and experiments in speech research, ranging from the relationship between speech and reading to questions of memory and perception of speech sounds. The following papers are included: "How Is Language Conveyed by Speech?"; "Reading, the Linguistic Process, and Linguistic…

  4. Monkey Lipsmacking Develops Like the Human Speech Rhythm

    ERIC Educational Resources Information Center

    Morrill, Ryan J.; Paukner, Annika; Ferrari, Pier F.; Ghazanfar, Asif A.

    2012-01-01

    Across all languages studied to date, audiovisual speech exhibits a consistent rhythmic structure. This rhythm is critical to speech perception. Some have suggested that the speech rhythm evolved "de novo" in humans. An alternative account--the one we explored here--is that the rhythm of speech evolved through the modification of rhythmic facial…

  5. Bidirectional clear speech perception benefit for native and high-proficiency non-native talkers and listeners: Intelligibility and accentedness

    PubMed Central

    Smiljanić, Rajka; Bradlow, Ann R.

    2011-01-01

    This study investigated how native language background interacts with speaking style adaptations in determining levels of speech intelligibility. The aim was to explore whether native and high proficiency non-native listeners benefit similarly from native and non-native clear speech adjustments. The sentence-in-noise perception results revealed that fluent non-native listeners gained a large clear speech benefit from native clear speech modifications. Furthermore, proficient non-native talkers in this study implemented conversational-to-clear speaking style modifications in their second language (L2) that resulted in significant intelligibility gain for both native and non-native listeners. The results of the accentedness ratings obtained for native and non-native conversational and clear speech sentences showed that while intelligibility was improved, the presence of foreign accent remained constant in both speaking styles. This suggests that objective intelligibility and subjective accentedness are two independent dimensions of non-native speech. Overall, these results provide strong evidence that greater experience in L2 processing leads to improved intelligibility in both production and perception domains. These results also demonstrated that speaking style adaptations along with less signal distortion can contribute significantly towards successful native and non-native interactions. PMID:22225056

  6. Nonlinear Frequency Compression in Hearing Aids: Impact on Speech and Language Development

    PubMed Central

    Bentler, Ruth; Walker, Elizabeth; McCreery, Ryan; Arenas, Richard M.; Roush, Patricia

    2015-01-01

    Objectives: The research questions of this study were: (1) Are children using nonlinear frequency compression (NLFC) in their hearing aids getting better access to the speech signal than children using conventional processing schemes? The authors hypothesized that children whose hearing aids provided wider input bandwidth would have more access to the speech signal, as measured by an adaptation of the Speech Intelligibility Index, and (2) are speech and language skills different for children who have been fit with the two different technologies; if so, in what areas? The authors hypothesized that if the children were getting increased access to the speech signal as a result of their NLFC hearing aids (question 1), it would be possible to see improved performance in areas of speech production, morphosyntax, and speech perception compared with the group with conventional processing. Design: Participants included 66 children with hearing loss recruited as part of a larger multisite National Institutes of Health–funded study, Outcomes for Children with Hearing Loss, designed to explore the developmental outcomes of children with mild to severe hearing loss. For the larger study, data on communication, academic and psychosocial skills were gathered in an accelerated longitudinal design, with entry into the study between 6 months and 7 years of age. Subjects in this report consisted of 3-, 4-, and 5-year-old children recruited at the North Carolina test site. All had at least 6 months of current hearing aid usage with their NLFC or conventional amplification. Demographic characteristics were compared at the three age levels as well as audibility and speech/language outcomes; speech-perception scores were compared for the 5-year-old groups. Results: The audibility provided did not differ between the technology options. As a result, there was no difference between groups on speech or language outcome measures at 4 or 5 years of age, and no impact on speech perception (measured at 5 years of age). The difference in Comprehensive Assessment of Spoken Language and mean length of utterance scores for the 3-year-old group favoring the group with conventional amplification may be a consequence of confounding factors such as increased incidence of prematurity in the group using NLFC. Conclusions: Children fit with NLFC had similar audibility, as measured by a modified Speech Intelligibility Index, compared with a matched group of children using conventional technology. In turn, there were no differences in their speech and language abilities. PMID:24892229

  7. Nonlinear frequency compression in hearing aids: impact on speech and language development.

    PubMed

    Bentler, Ruth; Walker, Elizabeth; McCreery, Ryan; Arenas, Richard M; Roush, Patricia

    2014-01-01

    The research questions of this study were: (1) Are children using nonlinear frequency compression (NLFC) in their hearing aids getting better access to the speech signal than children using conventional processing schemes? The authors hypothesized that children whose hearing aids provided wider input bandwidth would have more access to the speech signal, as measured by an adaptation of the Speech Intelligibility Index, and (2) are speech and language skills different for children who have been fit with the two different technologies; if so, in what areas? The authors hypothesized that if the children were getting increased access to the speech signal as a result of their NLFC hearing aids (question 1), it would be possible to see improved performance in areas of speech production, morphosyntax, and speech perception compared with the group with conventional processing. Participants included 66 children with hearing loss recruited as part of a larger multisite National Institutes of Health-funded study, Outcomes for Children with Hearing Loss, designed to explore the developmental outcomes of children with mild to severe hearing loss. For the larger study, data on communication, academic and psychosocial skills were gathered in an accelerated longitudinal design, with entry into the study between 6 months and 7 years of age. Subjects in this report consisted of 3-, 4-, and 5-year-old children recruited at the North Carolina test site. All had at least 6 months of current hearing aid usage with their NLFC or conventional amplification. Demographic characteristics were compared at the three age levels as well as audibility and speech/language outcomes; speech-perception scores were compared for the 5-year-old groups. Results indicate that the audibility provided did not differ between the technology options. As a result, there was no difference between groups on speech or language outcome measures at 4 or 5 years of age, and no impact on speech perception (measured at 5 years of age). The difference in Comprehensive Assessment of Spoken Language and mean length of utterance scores for the 3-year-old group favoring the group with conventional amplification may be a consequence of confounding factors such as increased incidence of prematurity in the group using NLFC. Children fit with NLFC had similar audibility, as measured by a modified Speech Intelligibility Index, compared with a matched group of children using conventional technology. In turn, there were no differences in their speech and language abilities.
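
    Both versions of this record use an adaptation of the Speech Intelligibility Index (SII) as the audibility measure. The sketch below illustrates only the general idea behind such an index — band-by-band audibility weighted by each band's importance for speech — and every number in it (band weights, levels, thresholds, and the 30-dB dynamic-range rule) is a simplified placeholder, not the ANSI SII standard or the study's adaptation.

```python
# Deliberately simplified, hypothetical audibility-index sketch.
import numpy as np

band_importance = np.array([0.10, 0.15, 0.25, 0.30, 0.20])  # sums to 1 (invented)
speech_level = np.array([55.0, 58.0, 60.0, 57.0, 50.0])  # dB SPL per band (invented)
threshold = np.array([40.0, 45.0, 55.0, 65.0, 70.0])     # aided thresholds (invented)

# Audibility per band: fraction of a nominal 30-dB speech dynamic range
# (peaks ~15 dB above the long-term level) above threshold, clipped to [0, 1].
audibility = np.clip((speech_level + 15 - threshold) / 30, 0, 1)
index = float(np.sum(band_importance * audibility))
print(f"Audibility index: {index:.2f}")   # 0 = inaudible, 1 = fully audible
```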

  8. SUS users' perception: a speech-language pathology approach based on health promotion.

    PubMed

    Cunha, Jenane Topanotti da; Massi, Giselle; Guarinello, Ana Cristina; Pereira, Francine Martins

    2016-01-01

    This study aimed to analyze the perceptions of users of the Brazilian Unified Health System (SUS) about the treatment center where they were assisted, as well as about the speech-language pathology services rendered by this center. This is a cross-sectional study comprising an interview with 26 open questions and 14 closed questions administered to 111 individuals who were assisted at the SUS center in August 2013. Quantitative content analysis was conducted using GraphPad Prism 5.1 and Statistical Package for the Social Sciences (SPSS) 15.0 software and the application of the D'Agostino & Pearson, F-test and chi-squared tests. Most participants reported a positive perception of the facilities and staff of the treatment center. They were also positive about the waiting time and the speech-language pathologists' explanations and conduct, especially in the audiology department. Most responses from participants were short and did not present an argumentative context. The treatment center received a high approval rating from most users. The audiology department had better grades than the clinical services related to language and oral motor pathologies.

  9. Phonological Encoding in Speech-Sound Disorder: Evidence from a Cross-Modal Priming Experiment

    ERIC Educational Resources Information Center

    Munson, Benjamin; Krause, Miriam O. P.

    2017-01-01

    Background: Psycholinguistic models of language production provide a framework for determining the locus of language breakdown that leads to speech-sound disorder (SSD) in children. Aims: To examine whether children with SSD differ from their age-matched peers with typical speech and language development (TD) in the ability phonologically to…

  10. Hearing versus Listening: Attention to Speech and Its Role in Language Acquisition in Deaf Infants with Cochlear Implants

    PubMed Central

    Houston, Derek M.; Bergeson, Tonya R.

    2013-01-01

    The advent of cochlear implantation has provided thousands of deaf infants and children access to speech and the opportunity to learn spoken language. Whether or not deaf infants successfully learn spoken language after implantation may depend in part on the extent to which they listen to speech rather than just hear it. We explore this question by examining the role that attention to speech plays in early language development according to a prominent model of infant speech perception – Jusczyk’s WRAPSA model – and by reviewing the kinds of speech input that maintains normal-hearing infants’ attention. We then review recent findings suggesting that cochlear-implanted infants’ attention to speech is reduced compared to normal-hearing infants and that speech input to these infants differs from input to infants with normal hearing. Finally, we discuss possible roles attention to speech may play on deaf children’s language acquisition after cochlear implantation in light of these findings and predictions from Jusczyk’s WRAPSA model. PMID:24729634

  11. Intensive Foreign Language Learning Reveals Effects on Categorical Perception of Sibilant Voicing After Only 3 Weeks

    PubMed Central

    Horn, Nynne Thorup; Sørensen, Stine Derdau; McGregor, William B.; Wallentin, Mikkel

    2015-01-01

    Models of speech learning suggest that adaptations to foreign language sound categories take place within 6 to 12 months of exposure to a foreign language. Results from laboratory language training show effects of very targeted training on nonnative speech contrasts within only 1 to 4 weeks of training. Results from immersion studies are inconclusive, but some suggest continued effects on nonnative speech perception after 6 to 8 years of experience. We investigated this apparent discrepancy in the timing of adaptation to foreign speech sounds in a longitudinal study of foreign language learning. We examined two groups of Danish language officer cadets learning either Arabic (Modern Standard Arabic and Egyptian Arabic) or Dari (Afghan Farsi) through intensive multifaceted language training. We conducted two experiments (identification and discrimination) with the cadets who were tested four times: at the start (T0), after 3 weeks (T1), 6 months (T2), and 19 months (T3). We used a phonemic Arabic contrast (pharyngeal vs. glottal frication) and a phonemic Dari contrast (sibilant voicing) as stimuli. We observed an effect of learning on the Dari learners’ identification of the Dari stimuli already after 3 weeks of language training, which was sustained, but not improved, after 6 and 19 months. The changes in the Dari learners’ identification functions were positively correlated with their grades after 6 months. We observed no other learning effects at the group level. We discuss the results in the light of predictions from speech learning models. PMID:27551355

  12. Status report on speech research. A report on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications

    NASA Astrophysics Data System (ADS)

    Liberman, A. M.

    1984-08-01

    This report (1 January-30 June) is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications. Manuscripts cover the following topics: Sources of variability in early speech development; Invariance: Functional or descriptive?; Brief comments on invariance in phonetic perception; Phonetic category boundaries are flexible; On categorizing aphasic speech errors; Universal and language-particular aspects of vowel-to-vowel coarticulation; Functionally specific articulatory cooperation following jaw perturbations during speech: Evidence for coordinative structures; Formant integration and the perception of nasal vowel height; Relative power of cues: F0 shifts vs. voice timing; Laryngeal management at utterance-internal word boundary in American English; Closure duration and release burst amplitude cues to stop consonant manner and place of articulation; Effects of temporal stimulus properties on perception of the [sl]-[spl] distinction; The physics of controlled conditions: A reverie about locomotion; On the perception of intonation from sinusoidal sentences; Speech Perception; Speech Articulation; Motor Control; Speech Development.

  13. Phonological Awareness and Print Knowledge of Preschool Children with Cochlear Implants

    PubMed Central

    Ambrose, Sophie E.; Fey, Marc E.; Eisenberg, Laurie S.

    2012-01-01

    Purpose: To determine whether preschool-age children with cochlear implants have age-appropriate phonological awareness and print knowledge and to examine the relationships of these skills with related speech and language abilities. Method: Twenty-four children with cochlear implants (CIs) and 23 peers with normal hearing (NH), ages 36 to 60 months, participated. Children’s print knowledge, phonological awareness, language, speech production, and speech perception abilities were assessed. Results: For phonological awareness, the CI group’s mean score fell within 1 standard deviation of the TOPEL’s normative sample mean but was more than 1 standard deviation below our NH group mean. The CI group’s performance did not differ significantly from that of the NH group for print knowledge. For the CI group, phonological awareness and print knowledge were significantly correlated with language, speech production, and speech perception. Together, these predictor variables accounted for 34% of variance in the CI group’s phonological awareness but no significant variance in their print knowledge. Conclusions: Children with CIs have the potential to develop age-appropriate early literacy skills by preschool-age but are likely to lag behind their NH peers in phonological awareness. Intervention programs serving these children should target these skills with instruction and by facilitating speech and language development. PMID:22223887

  14. Dimension-Based Statistical Learning of Vowels

    PubMed Central

    Liu, Ran; Holt, Lori L.

    2015-01-01

    Speech perception depends on long-term representations that reflect regularities of the native language. However, listeners rapidly adapt when speech acoustics deviate from these regularities due to talker idiosyncrasies such as foreign accents and dialects. To better understand these dual aspects of speech perception, we probe native English listeners’ baseline perceptual weighting of two acoustic dimensions (spectral quality and vowel duration) towards vowel categorization and examine how they subsequently adapt to an “artificial accent” that deviates from English norms in the correlation between the two dimensions. At baseline, listeners rely relatively more on spectral quality than vowel duration to signal vowel category, but duration nonetheless contributes. Upon encountering an “artificial accent” in which the spectral-duration correlation is perturbed relative to English language norms, listeners rapidly down-weight reliance on duration. Listeners exhibit this type of short-term statistical learning even in the context of nonwords, confirming that lexical information is not necessary to this form of adaptive plasticity in speech perception. Moreover, learning generalizes to both novel lexical contexts and acoustically-distinct altered voices. These findings are discussed in the context of a mechanistic proposal for how supervised learning may contribute to this type of adaptive plasticity in speech perception. PMID:26280268
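
    The "perceptual weighting" of spectral quality versus vowel duration described above is commonly estimated by regressing listeners' categorization responses on standardized cue values, with the coefficient magnitudes serving as cue weights. The sketch below shows that logic on simulated responses; nothing in it reproduces the study's stimuli or analysis.

```python
# Hedged sketch of cue-weight estimation via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
spectral = rng.normal(size=n)   # z-scored spectral quality
duration = rng.normal(size=n)   # z-scored vowel duration
# Simulated listener who relies more on spectral quality than duration:
p = 1 / (1 + np.exp(-(2.0 * spectral + 0.6 * duration)))
response = rng.random(n) < p    # binary vowel-category responses

model = LogisticRegression().fit(np.column_stack([spectral, duration]), response)
w_spectral, w_duration = model.coef_[0]
print(f"spectral weight = {w_spectral:.2f}, duration weight = {w_duration:.2f}")
```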

  15. How auditory discontinuities and linguistic experience affect the perception of speech and non-speech in English- and Spanish-speaking listeners

    NASA Astrophysics Data System (ADS)

    Hay, Jessica F.; Holt, Lori L.; Lotto, Andrew J.; Diehl, Randy L.

    2005-04-01

    The present study was designed to investigate the effects of long-term linguistic experience on the perception of non-speech sounds in English and Spanish speakers. Research using tone-onset-time (TOT) stimuli, a type of non-speech analogue of voice-onset-time (VOT) stimuli, has suggested that there is an underlying auditory basis for the perception of stop consonants based on a threshold for detecting onset asynchronies in the vicinity of +20 ms. For English listeners, stop consonant labeling boundaries are congruent with the positive auditory discontinuity, while Spanish speakers place their VOT labeling boundaries and discrimination peaks in the vicinity of 0 ms VOT. The present study addresses the question of whether long-term linguistic experience with different VOT categories affects the perception of non-speech stimuli that are analogous in their acoustic timing characteristics. A series of synthetic VOT stimuli and TOT stimuli were created for this study. Using language-appropriate labeling and ABX discrimination tasks, labeling boundaries (VOT) and discrimination peaks (VOT and TOT) are assessed for 24 monolingual English speakers and 24 monolingual Spanish speakers. The interplay between language experience and auditory biases is discussed. [Work supported by NIDCD.]
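
    Labeling boundaries of the kind this abstract describes are typically estimated by fitting a logistic function to identification responses along the VOT continuum and reading off the 50% crossover. The sketch below does this with invented, English-like response proportions, so the recovered boundary near +20 ms is an assumption for illustration, not the study's data.

```python
# Hedged sketch: estimate a VOT category boundary from labeling data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    """Probability of a 'voiceless' response as a function of VOT (ms)."""
    return 1 / (1 + np.exp(-slope * (vot - boundary)))

vot_steps = np.arange(-20, 61, 10)   # ms; a 9-step synthetic continuum
# Invented English-like proportions of "voiceless" responses per step:
p_voiceless = np.array([0.01, 0.02, 0.05, 0.15, 0.50, 0.85, 0.95, 0.99, 1.0])

(boundary, slope), _ = curve_fit(logistic, vot_steps, p_voiceless, p0=[20.0, 0.2])
print(f"Estimated boundary: {boundary:.1f} ms VOT")  # near the +20 ms discontinuity
```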

  16. Foreign Subtitles Help but Native-Language Subtitles Harm Foreign Speech Perception

    PubMed Central

    Mitterer, Holger; McQueen, James M.

    2009-01-01

    Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken. PMID:19918371

  17. Assessment of rhythmic entrainment at multiple timescales in dyslexia: evidence for disruption to syllable timing.

    PubMed

    Leong, Victoria; Goswami, Usha

    2014-02-01

    Developmental dyslexia is associated with rhythmic difficulties, including impaired perception of beat patterns in music and prosodic stress patterns in speech. Spoken prosodic rhythm is cued by slow (<10 Hz) fluctuations in speech signal amplitude. Impaired neural oscillatory tracking of these slow amplitude modulation (AM) patterns is one plausible source of impaired rhythm tracking in dyslexia. Here, we characterise the temporal profile of the dyslexic rhythm deficit by examining rhythmic entrainment at multiple speech timescales. Adult dyslexic participants completed two experiments aimed at testing the perception and production of speech rhythm. In the perception task, participants tapped along to the beat of 4 metrically-regular nursery rhyme sentences. In the production task, participants produced the same 4 sentences in time to a metronome beat. Rhythmic entrainment was assessed using both traditional rhythmic indices and a novel AM-based measure, which utilised 3 dominant AM timescales in the speech signal each associated with a different phonological grain-sized unit (0.9-2.5 Hz, prosodic stress; 2.5-12 Hz, syllables; 12-40 Hz, phonemes). The AM-based measure revealed atypical rhythmic entrainment by dyslexic participants to syllable patterns in speech, in perception and production. In the perception task, both groups showed equally strong phase-locking to Syllable AM patterns, but dyslexic responses were entrained to a significantly earlier oscillatory phase angle than controls. In the production task, dyslexic utterances showed shorter syllable intervals, and differences in Syllable:Phoneme AM cross-frequency synchronisation. Our data support the view that rhythmic entrainment at slow (∼5 Hz, Syllable) rates is atypical in dyslexia, suggesting that neural mechanisms for syllable perception and production may also be atypical. These syllable timing deficits could contribute to the atypical development of phonological representations for spoken words, the central cognitive characteristic of developmental dyslexia across languages. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  18. Assessment of rhythmic entrainment at multiple timescales in dyslexia: Evidence for disruption to syllable timing☆

    PubMed Central

    Leong, Victoria; Goswami, Usha

    2014-01-01

    Developmental dyslexia is associated with rhythmic difficulties, including impaired perception of beat patterns in music and prosodic stress patterns in speech. Spoken prosodic rhythm is cued by slow (<10 Hz) fluctuations in speech signal amplitude. Impaired neural oscillatory tracking of these slow amplitude modulation (AM) patterns is one plausible source of impaired rhythm tracking in dyslexia. Here, we characterise the temporal profile of the dyslexic rhythm deficit by examining rhythmic entrainment at multiple speech timescales. Adult dyslexic participants completed two experiments aimed at testing the perception and production of speech rhythm. In the perception task, participants tapped along to the beat of 4 metrically-regular nursery rhyme sentences. In the production task, participants produced the same 4 sentences in time to a metronome beat. Rhythmic entrainment was assessed using both traditional rhythmic indices and a novel AM-based measure, which utilised 3 dominant AM timescales in the speech signal each associated with a different phonological grain-sized unit (0.9–2.5 Hz, prosodic stress; 2.5–12 Hz, syllables; 12–40 Hz, phonemes). The AM-based measure revealed atypical rhythmic entrainment by dyslexic participants to syllable patterns in speech, in perception and production. In the perception task, both groups showed equally strong phase-locking to Syllable AM patterns, but dyslexic responses were entrained to a significantly earlier oscillatory phase angle than controls. In the production task, dyslexic utterances showed shorter syllable intervals, and differences in Syllable:Phoneme AM cross-frequency synchronisation. Our data support the view that rhythmic entrainment at slow (∼5 Hz, Syllable) rates is atypical in dyslexia, suggesting that neural mechanisms for syllable perception and production may also be atypical. These syllable timing deficits could contribute to the atypical development of phonological representations for spoken words, the central cognitive characteristic of developmental dyslexia across languages. PMID:23916752
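
    The three AM timescales named in this record (0.9–2.5 Hz stress, 2.5–12 Hz syllable, 12–40 Hz phoneme) can be pulled out of a speech amplitude envelope by band-pass filtering. The sketch below shows one plausible way to do so, using a Hilbert envelope and Butterworth filters; it is an illustrative reconstruction under those assumptions, not the authors' AM-based measure.

```python
# Hedged sketch: extract stress-, syllable-, and phoneme-rate AM bands.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

fs = 1000                              # assumed envelope sampling rate (Hz)
signal = np.random.randn(10 * fs)      # stand-in for a speech recording
envelope = np.abs(hilbert(signal))     # amplitude envelope via Hilbert transform

bands = {"stress": (0.9, 2.5), "syllable": (2.5, 12.0), "phoneme": (12.0, 40.0)}
am_components = {}
for name, (lo, hi) in bands.items():
    # Second-order-sections form keeps the narrow low-frequency bands stable.
    sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
    am_components[name] = sosfiltfilt(sos, envelope)  # zero-phase filtering
```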

  19. Speech research: Studies on the nature of speech, instrumentation for its investigation, and practical applications

    NASA Astrophysics Data System (ADS)

    Liberman, A. M.

    1982-03-01

    This report is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation and practical applications. Manuscripts cover the following topics: Speech perception and memory coding in relation to reading ability; The use of orthographic structure by deaf adults: Recognition of finger-spelled letters; Exploring the information support for speech; The stream of speech; Using the acoustic signal to make inferences about place and duration of tongue-palate contact; Patterns of human interlimb coordination emerge from the properties of nonlinear limit cycle oscillatory processes: Theory and data; Motor control: Which themes do we orchestrate?; Exploring the nature of motor control in Down's syndrome; Periodicity and auditory memory: A pilot study; Reading skill and language skill: On the role of sign order and morphological structure in memory for American Sign Language sentences; Perception of nasal consonants with special reference to Catalan; and Speech production characteristics of the hearing impaired.

  20. The Role of Visual Image and Perception in Speech Development of Children with Speech Pathology

    ERIC Educational Resources Information Center

    Tsvetkova, L. S.; Kuznetsova, T. M.

    1977-01-01

    Investigated with 125 children (4-14 years old) with speech, language, or emotional disorders was the assumption that the naming function can be underdeveloped because of defects in the word's gnostic base. (Author/DB)

  1. Hallucination- and speech-specific hypercoupling in frontotemporal auditory and language networks in schizophrenia using combined task-based fMRI data: An fBIRN study.

    PubMed

    Lavigne, Katie M; Woodward, Todd S

    2018-04-01

    Hypercoupling of activity in speech-perception-specific brain networks has been proposed to play a role in the generation of auditory-verbal hallucinations (AVHs) in schizophrenia; however, it is unclear whether this hypercoupling extends to nonverbal auditory perception. We investigated this by comparing schizophrenia patients with and without AVHs, and healthy controls, on task-based functional magnetic resonance imaging (fMRI) data combining verbal speech perception (SP), inner verbal thought generation (VTG), and nonverbal auditory oddball detection (AO). Data from two previously published fMRI studies were simultaneously analyzed using group constrained principal component analysis for fMRI (group fMRI-CPCA), which allowed for comparison of task-related functional brain networks across groups and tasks while holding the brain networks under study constant, leading to determination of the degree to which networks are common to verbal and nonverbal perception conditions, and which show coordinated hyperactivity in hallucinations. Three functional brain networks emerged: (a) auditory-motor, (b) language processing, and (c) default-mode (DMN) networks. Combining the AO and sentence tasks allowed the auditory-motor and language networks to separately emerge, whereas they were aggregated when individual tasks were analyzed. AVH patients showed greater coordinated activity (deactivation for DMN regions) than non-AVH patients during SP in all networks, but this did not extend to VTG or AO. This suggests that the hypercoupling in AVH patients in speech-perception-related brain networks is specific to perceived speech, and does not extend to perceived nonspeech or inner verbal thought generation. © 2017 Wiley Periodicals, Inc.
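
    Group fMRI-CPCA, as used above, combines multivariate regression with principal component analysis. The sketch below shows only the core constrained-PCA step — PCA restricted to the portion of the BOLD data predictable from a task design matrix — on simulated data, simplifying away the multi-subject stacking and HRF modeling of the full method.

```python
# Hedged sketch of the constrained-PCA step behind fMRI-CPCA (simulated data).
import numpy as np

rng = np.random.default_rng(2)
n_scans, n_voxels, n_predictors = 200, 500, 6
Z = rng.normal(size=(n_scans, n_voxels))      # BOLD data (time x voxels)
G = rng.normal(size=(n_scans, n_predictors))  # task design matrix

# Constrained step: keep only the variance in Z predictable from G.
beta, *_ = np.linalg.lstsq(G, Z, rcond=None)
Z_predicted = G @ beta

# PCA via SVD on the predicted data: rows of Vt are task-related networks,
# U * S are their time courses (component scores).
U, S, Vt = np.linalg.svd(Z_predicted, full_matrices=False)
variance_explained = S**2 / np.sum(S**2)
print(variance_explained[:3])
```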

  2. Articulating What Infants Attune to in Native Speech

    PubMed Central

    Best, Catherine T.; Goldstein, Louis M.; Nam, Hosung; Tyler, Michael D.

    2016-01-01

    To become language users, infants must embrace the integrality of speech perception and production. That they do so, and quite rapidly, is implied by the native-language attunement they achieve in each domain by 6–12 months. Yet research has most often addressed one or the other domain, rarely how they interrelate. Moreover, mainstream assumptions that perception relies on acoustic patterns whereas production involves motor patterns entail that the infant would have to translate incommensurable information to grasp the perception–production relationship. We posit the more parsimonious view that both domains depend on commensurate articulatory information. Our proposed framework combines principles of the Perceptual Assimilation Model (PAM) and Articulatory Phonology (AP). According to PAM, infants attune to articulatory information in native speech and detect similarities of nonnative phones to native articulatory patterns. The AP premise that gestures of the speech organs are the basic elements of phonology offers articulatory similarity metrics while satisfying the requirement that phonological information be discrete and contrastive: (a) distinct articulatory organs produce vocal tract constrictions and (b) phonological contrasts recruit different articulators and/or constrictions of a given articulator that differ in degree or location. Various lines of research suggest young children perceive articulatory information, which guides their productions: discrimination of between- versus within-organ contrasts, simulations of attunement to language-specific articulatory distributions, multimodal speech perception, oral/vocal imitation, and perceptual effects of articulator activation or suppression. We conclude that articulatory gesture information serves as the foundation for developmental integrality of speech perception and production. PMID:28367052

  3. Auditory and language development in Mandarin-speaking children after cochlear implantation.

    PubMed

    Lu, Xing; Qin, Zhaobing

    2018-04-01

    To evaluate early auditory performance, speech perception and language skills in Mandarin-speaking prelingual deaf children in the first two years after they received a cochlear implant (CI) and analyse the effects of possible associated factors. The Infant-Toddler Meaningful Auditory Integration Scale (ITMAIS)/Meaningful Auditory Integration Scale (MAIS), Mandarin Early Speech Perception (MESP) test and Putonghua Communicative Development Inventory (PCDI) were used to assess auditory and language outcomes in 132 Mandarin-speaking children pre- and post-implantation. Children with CIs exhibited an ITMAIS/MAIS and PCDI developmental trajectory similar to that of children with normal hearing. The increased number of participants who achieved MESP categories 1-6 at each test interval showed a significant improvement in speech perception by paediatric CI recipients. Age at implantation and socioeconomic status were consistently associated with both auditory and language outcomes in the first two years post-implantation. Mandarin-speaking children with CIs exhibit significant improvements in early auditory and language development. Though these improvements followed the normative developmental trajectories, they still exhibited a gap compared with normative values. Earlier implantation and higher socioeconomic status are consistent predictors of greater auditory and language skills in the early stage. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Audio-visual speech perception: a developmental ERP investigation

    PubMed Central

    Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  5. Bilingualism and increased attention to speech: Evidence from event-related potentials.

    PubMed

    Kuipers, Jan Rouke; Thierry, Guillaume

    2015-10-01

    A number of studies have shown that from an early age, bilinguals outperform their monolingual peers on executive control tasks. We previously found that bilingual children and adults also display greater attention to unexpected language switches within speech. Here, we investigated the effect of a bilingual upbringing on speech perception in one language. We recorded monolingual and bilingual toddlers' event-related potentials (ERPs) to spoken words preceded by pictures. Words matching the picture prime elicited an early frontal positivity in bilingual participants only, whereas later ERP amplitudes associated with semantic processing did not differ between groups. These results add to the growing body of evidence that bilingualism increases overall attention during speech perception whilst semantic integration is unaffected. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Experiential instruction in graduate-level preparation of speech-language pathology students in outer and middle ear screening.

    PubMed

    Serpanos, Yula C; Senzer, Deborah

    2015-05-01

    This study presents a piloted training model of experiential instruction in outer and middle ear (OE-ME) screening for graduate speech-language pathology students, with peer teaching by doctor of audiology (AuD) students. Six individual experiential training sessions in screening otoscopy and tympanometry, each led by a supervised AuD student, were conducted for 36 graduate-level speech-language pathology students. After the experiential training, survey outcomes from 24 speech-language pathology students revealed a significant improvement (p = .01) in their perceived knowledge of and comfort in performing screening otoscopy (handheld and video otoscopy) and tympanometry. In a group of matched controls who did not receive experiential training in OE-ME screening (n = 24), ratings on the same learning-outcomes survey were significantly poorer (p = .01) than those of students who did receive the training. A training model of experiential instruction for speech-language pathology students by AuD students improved learning outcomes, demonstrating its promise for shaping clinical practice. The instructional model also meets Council on Academic Accreditation in Audiology and Speech-Language Pathology (CAA; American Speech-Language-Hearing Association, 2008) and American Speech-Language-Hearing Association (2014) Certificate of Clinical Competence (ASHA CCC) standards for speech-language pathology in OE-ME screening, as well as CAA (2008) and ASHA (2012) CCC standards for the supervisory process in audiology.

  7. Massachusetts School-Based Speech-Language Pathologists' Experiences with and Perceptions of Educator Evaluation

    ERIC Educational Resources Information Center

    Corcoran, Molly A.

    2017-01-01

    Educator evaluation is of significant interest and concern to all members of the national school community. School-based speech-language pathologists (SLPs) share these sentiments with their classroom counterparts. Because SLPs are frequently included in such evaluation systems, it is of concern to the SLP community that research documenting how school-based…

  8. Perception of Melodic Contour and Intonation in Autism Spectrum Disorder: Evidence from Mandarin Speakers

    ERIC Educational Resources Information Center

    Jiang, Jun; Liu, Fang; Wan, Xuan; Jiang, Cunmei

    2015-01-01

    Tone language experience benefits pitch processing in music and speech for typically developing individuals. No known studies have examined pitch processing in individuals with autism who speak a tone language. This study investigated discrimination and identification of melodic contour and speech intonation in a group of Mandarin-speaking…

  9. Effects of Recurrent Otitis Media on Language, Speech, and Educational Achievement in Menominee Indian Children.

    ERIC Educational Resources Information Center

    Thielke, Helen M.; Shriberg, Lawrence D.

    1990-01-01

    Among 28 monolingual English-speaking Menominee Indian children, a history of otitis media was associated with significantly lower scores on measures of language comprehension and speech perception and production at ages 3-5, and on school standardized tests 2 years later. Contains 38 references. (SV)

  10. Development of Hemispheric Specialization for Lexical Pitch-Accent in Japanese Infants

    ERIC Educational Resources Information Center

    Sato, Yutaka; Sogabe, Yuko; Mazuka, Reiko

    2010-01-01

    Infants' speech perception abilities change through the first year of life, from broad sensitivity to a wide range of speech contrasts to becoming more finely attuned to their native language. What remains unclear, however, is how this perceptual change relates to brain responses to native language contrasts in terms of the functional…

  11. Research on Spoken Language Processing. Progress Report No. 21 (1996-1997).

    ERIC Educational Resources Information Center

    Pisoni, David B.

    This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…

  12. An Evaluation of Text-to-Speech Synthesizers in the Foreign Language Classroom: Learners' Perceptions

    ERIC Educational Resources Information Center

    Bione, Tiago; Grimshaw, Jennica; Cardoso, Walcir

    2016-01-01

    As stated in Cardoso, Smith, and Garcia Fuentes (2015), second language researchers and practitioners have explored the pedagogical capabilities of Text-To-Speech synthesizers (TTS) for their potential to enhance the acquisition of writing (e.g. Kirstein, 2006), vocabulary and reading (e.g. Proctor, Dalton, & Grisham, 2007), and pronunciation…

  13. Parent Perceptions of Audiology and Speech-Language Services and Support for Young Children with Cochlear Implants

    ERIC Educational Resources Information Center

    Kelly, Patrick Michael

    2013-01-01

    Parents of children diagnosed with severe-profound sensorineural hearing loss are selecting cochlear implants at an increasing rate and when their children are very young. Audiologists and speech-language pathologists are typically involved in habilitation activities following implantation in an effort to increase children's access to listening…

  14. Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities

    ERIC Educational Resources Information Center

    Norrix, Linda W.; Plante, Elena; Vance, Rebecca

    2006-01-01

    Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…

  15. Teamwork: a study of Australian and US student speech-language pathologists.

    PubMed

    Morrison, Susan C; Lincoln, Michelle A; Reed, Vicki A

    2009-05-01

    In the discipline of speech-language pathology, little is known about the explicit and implicit team skills taught within university curricula. This study surveyed 281 speech-language pathology students to determine a baseline of their perceived ability to participate in interprofessional teams. The students were enrolled in programs in Australia and the USA and were surveyed about their perceptions of their attitudes, knowledge and skills in teamwork. A MANCOVA testing the main effects of age, university program and clinical experience showed that age was not significant, negating the perception that life experiences improve perceived team skills. Clinical experience was significant: students with more clinical experience rated themselves more highly on their team abilities. Post hoc analysis revealed that Australian students rated themselves higher than their US counterparts on their knowledge about working on teams, but lower on attitudes to teams; all students perceived that they had the skills to work on teams. These results provide insight into teamwork training components in current speech-language pathology curricula. Implications are discussed with reference to enhancing university training programs.

  16. Perception of Native English Reduced Forms in Adverse Environments by Chinese Undergraduate Students

    ERIC Educational Resources Information Center

    Wong, Simpson W. L.; Tsui, Jenny K. Y.; Chow, Bonnie Wing-Yin; Leung, Vina W. H.; Mok, Peggy; Chung, Kevin Kien-Hoa

    2017-01-01

    Previous research has shown that learners of English-as-a-second-language (ESL) have difficulties in understanding connected speech spoken by native English speakers. Extending from past research limited to quiet listening condition, this study examined the perception of English connected speech presented under five adverse conditions, namely…

  17. Musical expertise and second language learning.

    PubMed

    Chobert, Julie; Besson, Mireille

    2013-06-06

    Increasing evidence suggests that musical expertise influences brain organization and brain functions. Moreover, results at the behavioral and neurophysiological levels reveal that musical expertise positively influences several aspects of speech processing, from auditory perception to speech production. In this review, we focus on the main results of the literature that led to the idea that musical expertise may benefit second language acquisition. We discuss several interpretations that may account for the influence of musical expertise on speech processing in native and foreign languages, and we propose new directions for future research.

  18. Cross-Linguistic Differences in Bilinguals' Fundamental Frequency Ranges.

    PubMed

    Ordin, Mikhail; Mennen, Ineke

    2017-06-10

    We investigated cross-linguistic differences in fundamental frequency range (FFR) in Welsh-English bilingual speech. This is the first study to report gender-specific behavior in switching FFRs across languages in bilingual speech. FFR was conceptualized as a behavioral pattern using measures of span (the range of fundamental frequency, in semitones, covered by the speaker's voice) and level (the overall height of the fundamental frequency maxima, minima, and means of the speaker's voice) in each language. FFR measures were taken from recordings of 30 Welsh-English bilinguals (14 women and 16 men), who read 70 semantically matched sentences, 35 in each language. Comparisons were made within speakers across languages, separately for male and female speech. Language background and language use information was elicited for qualitative analysis of extralinguistic factors that might affect the FFR. Cross-linguistic differences in FFR were found to be consistent across female bilinguals but random across male bilinguals. Most female bilinguals showed distinct FFRs for each language. Most male bilinguals, however, were found not to change their FFR when switching languages; those who did change used different strategies than the women when differentiating FFRs between languages. The detected cross-linguistic differences in FFR can be explained by sociocultural factors. Sociolinguistic factors should therefore be taken into account in any further study of language-specific pitch setting and cross-linguistic differences in FFR.
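
    As a concrete illustration of the span and level measures described above, the sketch below derives both from a list of F0 values in Hz. It is a minimal reconstruction, not the authors' analysis code: the 50 Hz semitone reference, the function names, and the F0 values are illustrative assumptions.

    ```python
    # Hedged sketch of FFR "span" (semitones) and "level" (Hz) measures.
    import math

    def hz_to_semitones(f0_hz, ref_hz=50.0):
        """Convert F0 in Hz to semitones relative to a reference frequency."""
        return 12.0 * math.log2(f0_hz / ref_hz)

    def ffr_measures(f0_track_hz):
        """Span = F0 range in semitones; level = height of maxima/minima/mean."""
        f0_max, f0_min = max(f0_track_hz), min(f0_track_hz)
        return {
            "span_semitones": hz_to_semitones(f0_max) - hz_to_semitones(f0_min),
            "level_hz": {
                "max": f0_max,
                "min": f0_min,
                "mean": sum(f0_track_hz) / len(f0_track_hz),
            },
        }

    # Example: F0 values (Hz) pooled from one speaker's sentences in one language.
    print(ffr_measures([110.0, 95.0, 180.0, 140.0, 125.0]))
    ```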

  19. Learning to match auditory and visual speech cues: social influences on acquisition of phonological categories.

    PubMed

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues, but also social cues that might foster language learning. Yet both audiovisual speech information and social information have so far received little attention in research on infants' early language development. In a preferential looking paradigm, 44 German 6-month-olds' ability to detect mismatches between concurrently presented auditory and visual native vowels was tested. Outcomes were related to mothers' speech style and interactive behavior, assessed during free play with their infant, and to infant-specific factors, assessed through a questionnaire. Results show that mothers' and infants' social behavior modulated infants' preference for matching audiovisual speech. Moreover, infants' audiovisual speech perception correlated with later vocabulary size, suggesting a lasting effect on language development. © 2014 The Authors. Child Development © 2014 Society for Research in Child Development, Inc.

  1. Simplification.

    ERIC Educational Resources Information Center

    George, H. V.

    This article discusses language simplification as one aspect of a person's speech activity, and relates simplification to second language learning. Translation from language to language and translation within one language are processes through which a person, as decoder, decontextualizes a message form-sequence through perception of its…

  2. Early speech perception in Mandarin-speaking children at one-year post cochlear implantation.

    PubMed

    Chen, Yuan; Wong, Lena L N; Zhu, Shufeng; Xi, Xin

    2016-01-01

    The aim in this study was to examine early speech perception outcomes in Mandarin-speaking children during the first year of cochlear implant (CI) use. A hierarchical early speech perception battery was administered to 80 children before and 3, 6, and 12 months after implantation. Demographic information was obtained to evaluate its relationship with these outcomes. Regardless of dialect exposure and whether a hearing aid was trialed before implantation, implant recipients were able to attain similar pre-lingual auditory skills after 12 months of CI use. Children speaking Mandarin developed early Mandarin speech perception faster than those with greater exposure to other Chinese dialects. In addition, children with better pre-implant hearing levels and younger age at implantation attained significantly better speech perception scores after 12 months of CI use. Better pre-implant hearing levels and higher maternal education level were also associated with a significantly steeper growth in early speech perception ability. Mandarin-speaking children with CIs are able to attain early speech perception results comparable to those of their English-speaking counterparts. In addition, consistent single language input via CI probably enhances early speech perception development at least during the first-year of CI use. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. The influence of visual and auditory information on the perception of speech and non-speech oral movements in patients with left hemisphere lesions.

    PubMed

    Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram

    2009-03-01

    Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands audiovisual processing both in speech and language treatment and in the diagnosis of oral-facial apraxia. The purpose of this study was to investigate differences in audiovisual perception of speech as compared to non-speech oral gestures. Bimodal and unimodal speech and non-speech items were used; in addition, discordant stimuli were constructed and presented for imitation. The study examined a group of healthy volunteers and a group of patients with lesions of the left hemisphere. Patients made substantially more errors than controls, but the factors influencing imitation accuracy were largely the same in both groups. Error analyses in both groups suggested different types of representations for speech as compared to the non-speech domain, with speech processing relying more heavily on the auditory modality and non-speech processing on the visual modality. Additionally, this study was able to show that the McGurk effect is not limited to speech.

  4. [Prosody, speech input and language acquisition].

    PubMed

    Jungheim, M; Miller, S; Kühn, D; Ptok, M

    2014-04-01

    In order to acquire language, children require speech input, and the prosody of that input plays an important role. In most cultures adults modify their code when communicating with children; compared to normal speech, this code differs especially with regard to prosody. For this review, a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in such a way that meaningful sequences are highlighted acoustically, so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported indicating that the linguistically reduced CDS could hinder first language acquisition.

  5. Don’t speak too fast! Processing of fast rate speech in children with specific language impairment

    PubMed Central

    Bedoin, Nathalie; Krifi-Papoz, Sonia; Herbillon, Vania; Caillot-Bascoul, Aurélia; Gonzalez-Monge, Sibylle; Boulenger, Véronique

    2018-01-01

    Background: Perception of speech rhythm requires the auditory system to track temporal envelope fluctuations, which carry syllabic and stress information. Reduced sensitivity to rhythmic acoustic cues has been evidenced in children with Specific Language Impairment (SLI), impeding syllabic parsing and speech decoding. Our study investigated whether these children experience specific difficulties processing fast-rate speech as compared with typically developing (TD) children. Method: Sixteen French children with SLI (8–13 years old), with mainly expressive phonological disorders and preserved comprehension, and 16 age-matched TD children performed a judgment task on sentences produced (1) at a normal rate, (2) at a fast rate, or (3) time-compressed. The sensitivity index (d′) to semantically incongruent sentence-final words was measured. Results: Overall, children with SLI performed significantly worse than TD children. Importantly, as revealed by the significant Group × Speech Rate interaction, children with SLI found it more challenging than TD children to process both naturally and artificially accelerated speech; the two groups did not differ significantly in normal-rate speech processing. Conclusion: In agreement with rhythm-processing deficits in atypical language development, our results suggest that children with SLI face difficulties adjusting to rapid speech rate. These findings are interpreted in light of temporal sampling and prosodic phrasing frameworks and of oscillatory mechanisms underlying speech perception. PMID:29373610
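
    The sensitivity index reported above is the standard signal-detection d′: the z-transformed hit rate minus the z-transformed false-alarm rate. The sketch below shows that computation; the log-linear correction and the example counts are assumptions for illustration, not details taken from the study.

    ```python
    # Hedged sketch of the d' sensitivity index: d' = z(hits) - z(false alarms).
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction (add 0.5 per cell) avoids infinite z-scores
        # when a rate is exactly 0 or 1; one common convention among several.
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        return z(hit_rate) - z(fa_rate)

    # Example: detecting semantically incongruent sentence-final words.
    print(round(d_prime(hits=18, misses=2, false_alarms=4, correct_rejections=16), 2))
    ```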

  6. A minority perspective in the diagnosis of child language disorders.

    PubMed

    Seymour, H N; Bland, L

    1991-01-01

    The effective diagnosis and treatment of persons from diverse minority language backgrounds has become an important issue in the field of speech and language pathology. Yet many SLPs have had little or no formal training in minority languages, there is a paucity of normative data on language acquisition in minority groups, and there are few standardized speech and language tests appropriate for these groups. We described a diagnostic process that addresses these problems. The diagnostic protocol we have proposed for a child from a Black English-speaking background characterizes many of the major issues in treating minority children. In summary, we proposed four assessment strategies: gathering referral source data; making direct observations; using standardized tests of non-speech and language behavior (cognition, perception, motor, etc.); and eliciting language samples and probes.

  7. The Bilingual Language Interaction Network for Comprehension of Speech

    ERIC Educational Resources Information Center

    Shook, Anthony; Marian, Viorica

    2013-01-01

    During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can…

  8. The Effects of Direct and Indirect Speech Acts on Native English and ESL Speakers' Perception of Teacher Written Feedback

    ERIC Educational Resources Information Center

    Baker, Wendy; Hansen Bricker, Rachel

    2010-01-01

    This study explores how second language (L2) learners perceive indirect (hedging or indirect speech acts) and direct written teacher feedback. Though research suggests that indirect speech acts may be more difficult to interpret than direct speech acts (Champagne, 2001; Holtgraves, 1999), using indirect speech acts is often encouraged in…

  9. Early Sign Language Exposure and Cochlear Implantation Benefits.

    PubMed

    Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S

    2017-07-01

    Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.

  10. Preparation and Perceptions of Speech-Language Pathologists Working with Children with Cochlear Implants

    ERIC Educational Resources Information Center

    Compton, Mary V.; Tucker, Denise A.; Flynn, Perry F.

    2009-01-01

    This study examined the level of preparedness of North Carolina speech-language pathologists (SLPs) who serve school-aged children with cochlear implants (CIs). A survey distributed to 190 school-based SLPs in North Carolina revealed that 79% of the participants felt they had little to no confidence in managing CI technology or in providing…

  11. Familiarity Breeds Support: Speech-Language Pathologists' Perceptions of Bullying of Students with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Blood, Gordon W.; Blood, Ingrid M.; Coniglio, Amy D.; Finke, Erinn H.; Boyle, Michael P.

    2013-01-01

    Children with autism spectrum disorders (ASD) are primary targets for bullies and victimization. Research shows school personnel may be uneducated about bullying and ways to intervene. Speech-language pathologists (SLPs) in schools often work with children with ASD and may have victims of bullying on their caseloads. These victims may feel most…

  12. The Perceived Importance of Anatomy and Neuroanatomy in the Practice of Speech-Language Pathology

    ERIC Educational Resources Information Center

    Martin, Kate; Bessell, Nicola J.; Scholten, Ingrid

    2013-01-01

    The purpose of this study was to examine the application of anatomy and neuroanatomy knowledge to current practice of speech-language pathology (SLP), based on the perceptions of practicing SLPs, and to elicit information on participants' experiences of learning these subjects in their primary SLP degree with a view to inform potential…

  13. K-5 Educators' Perceptions of the Role of Speech Language Pathologists

    ERIC Educational Resources Information Center

    Hatcher, Karmon D.

    2017-01-01

    Rarely is a school-based speech language pathologist (SLP) thought of as an active contributor to the achievement of students or to the learning community in general. Researchers have found benefits for students when members of the learning community collaborate, and the SLP should be a part of this community collaboration. This qualitative case…

  14. The Effects of Corrective Feedback on Instructed L2 Speech Perception

    ERIC Educational Resources Information Center

    Lee, Andrew H.; Lyster, Roy

    2016-01-01

    To what extent do second language (L2) learners benefit from instruction that includes corrective feedback (CF) on L2 speech perception? This article addresses this question by reporting the results of a classroom-based experimental study conducted with 32 young adult Korean learners of English. An instruction-only group and an instruction + CF…

  15. Acoustic Processing of Temporally Modulated Sounds in Infants: Evidence from a Combined Near-Infrared Spectroscopy and EEG Study

    PubMed Central

    Telkemeyer, Silke; Rossi, Sonja; Nierhaus, Till; Steinbrink, Jens; Obrig, Hellmuth; Wartenburger, Isabell

    2010-01-01

    Speech perception requires rapid extraction of the linguistic content from the acoustic signal. The ability to efficiently process rapid changes in auditory information is important for decoding speech and thereby crucial during language acquisition. Investigating functional networks of speech perception in infancy might elucidate neuronal ensembles supporting perceptual abilities that gate language acquisition. Interhemispheric specializations for language have been demonstrated in infants; how these asymmetries are shaped by basic temporal acoustic properties is under debate. We recently provided evidence that newborns process non-linguistic sounds sharing temporal features with language in a differential and lateralized fashion. The present study used the same material while measuring brain responses of 6- and 3-month-old infants using simultaneous recordings of electroencephalography (EEG) and near-infrared spectroscopy (NIRS). NIRS reveals that the lateralization observed in newborns remains constant over the first months of life. While fast acoustic modulations elicit bilateral neuronal activations, slow modulations lead to right-lateralized responses. Additionally, auditory-evoked potentials and oscillatory EEG responses show differential responses for fast and slow modulations, indicating a sensitivity to temporal acoustic variations. Oscillatory responses reveal an effect of development: 6- but not 3-month-old infants show stronger theta-band desynchronization for slowly modulated sounds. Whether this developmental effect is due to increasing fine-grained perception for spectrotemporal sounds in general remains speculative. Our findings support the notion that a more general specialization for acoustic properties can be considered the basis for the lateralization of speech perception. The results show that concurrent assessment of vascular-based imaging and electrophysiological responses has great potential in research on language acquisition. PMID:21716574

  16. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    PubMed Central

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  17. Recognizing speech in a novel accent: the motor theory of speech perception reframed.

    PubMed

    Moulin-Frier, Clément; Arbib, Michael A

    2013-08-01

    The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, when we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstruction of the speech gestures of the speaker rather than the listener. To better assess the motor theory and this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and for viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sound produced by the speaker to phonemes in the native language repertoire of the listener. This, on average, improves the recognition of later words. The model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture to revisit claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory.

  18. The Cross-Cultural Study of Language Acquisition.

    ERIC Educational Resources Information Center

    Heath, Shirley Brice

    1985-01-01

    One approach to studying the nature of diverse speech exchange systems across sociocultural groups starts from the premise that all learning is cultural learning, and that language socialization is the way individuals become members of both their primary speech community and their secondary speech communities. Researchers must recognize that the…

  19. Greek perception and production of an English vowel contrast: A preliminary study

    NASA Astrophysics Data System (ADS)

    Podlipský, Václav J.

    2005-04-01

    This study focused on language-independent principles functioning in the acquisition of second language (L2) contrasts. Specifically, it tested Bohn's Desensitization Hypothesis [in Speech perception and linguistic experience: Issues in Cross Language Research, edited by W. Strange (York Press, Baltimore, 1995)], which predicted that Greek speakers of English as an L2 would base their perceptual identification of English /i/ and /I/ on durational differences. Synthetic vowels differing orthogonally in duration and spectrum between the /i/ and /I/ endpoints served as stimuli for a forced-choice identification test. To assess L2 proficiency and to evaluate the possibility of cross-language category assimilation, productions of English /i/, /I/, and /ɛ/ and of Greek /i/ and /e/ were elicited and analyzed acoustically. The L2 utterances were also rated for degree of foreign accent. Two native speakers of Modern Greek with low experience in English and two with intermediate experience participated. Six native English (NE) listeners and six NE speakers tested in an earlier study constituted the control groups. Heterogeneous perceptual behavior was observed for the L2 subjects. It is concluded that until acquisition in completely naturalistic settings is tested, possible interference of formally induced meta-linguistic differentiation between a "short" and a "long" vowel cannot be eliminated.

  20. Promising Practices in E-Supervision: Exploring Graduate Speech-Language Pathology Interns’ Perceptions

    PubMed Central

    Carlin, Charles H.; Milam, Jennifer L.; Carlin, Emily L.; Owen, Ashley

    2012-01-01

    E-supervision has a potential role in addressing speech-language personnel shortages in rural and difficult to staff school districts. The purposes of this article are twofold: to determine how e-supervision might support graduate speech-language pathologist (SLP) interns placed in rural, remote, and difficult to staff public school districts; and, to investigate interns’ perceptions of in-person supervision compared to e-supervision. The study used a mixed methodology approach and collected data from surveys, supervision documents and records, and interviews. The results showed the use of e-supervision allowed graduate SLP interns to be adequately supervised across a variety of clients and professional activities in a manner that was similar to in-person supervision. Further, e-supervision was perceived as a more convenient and less stressful supervision format when compared to in-person supervision. Other findings are discussed and implications and limitations provided. PMID:25945201

  1. Effects of Early Bilingual Experience with a Tone and a Non-Tone Language on Speech-Music Integration

    PubMed Central

    Asaridou, Salomi S.; Hagoort, Peter; McQueen, James M.

    2015-01-01

    We investigated music and language processing in a group of early bilinguals who spoke a tone language and a non-tone language (Cantonese and Dutch). We assessed online speech-music processing interactions, that is, interactions that occur when speech and music are processed simultaneously in songs, with a speeded classification task. In this task, participants judged sung pseudowords either musically (based on the direction of the musical interval) or phonologically (based on the identity of the sung vowel). We also assessed longer-term effects of linguistic experience on musical ability, that is, the influence of extensive prior experience with language when processing music. These effects were assessed with a task in which participants had to learn to identify musical intervals and with four pitch-perception tasks. Our hypothesis was that due to their experience in two different languages using lexical versus intonational tone, the early Cantonese-Dutch bilinguals would outperform the Dutch control participants. In online processing, the Cantonese-Dutch bilinguals processed speech and music more holistically than controls. This effect seems to be driven by experience with a tone language, in which integration of segmental and pitch information is fundamental. Regarding longer-term effects of linguistic experience, we found no evidence for a bilingual advantage in either the music-interval learning task or the pitch-perception tasks. Together, these results suggest that being a Cantonese-Dutch bilingual does not have any measurable longer-term effects on pitch and music processing, but does have consequences for how speech and music are processed jointly. PMID:26659377

  2. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    PubMed

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners; specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese, and it contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.
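
    The "unique variance" figures above come from regression analysis. A common way to obtain such a figure is the drop in R² when one predictor is removed from the full model; the sketch below illustrates that general technique on synthetic data and is not the study's analysis script.

    ```python
    # Hedged sketch: a predictor's unique variance as the drop in R^2 when it
    # is removed from the full ordinary least-squares model. Synthetic data.
    import numpy as np

    def r_squared(X, y):
        """R^2 of an OLS fit with an intercept column added to X."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - resid.var() / y.var()

    rng = np.random.default_rng(0)
    tone = rng.normal(size=100)        # stand-in: lexical tone sensitivity
    intonation = rng.normal(size=100)  # stand-in: intonation production
    vocab = 0.6 * tone + 0.25 * intonation + rng.normal(scale=0.8, size=100)

    full = r_squared(np.column_stack([tone, intonation]), vocab)
    reduced = r_squared(intonation.reshape(-1, 1), vocab)
    print(f"unique variance of tone sensitivity: {full - reduced:.2f}")
    ```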

  3. Differences in the Association between Segment and Language: Early Bilinguals Pattern with Monolinguals and Are Less Accurate than Late Bilinguals

    PubMed Central

    Blanco, Cynthia P.; Bannard, Colin; Smiljanic, Rajka

    2016-01-01

    Early bilinguals often show as much sensitivity to L2-specific contrasts as monolingual speakers of the L2, but most work on cross-language speech perception has focused on isolated segments, and typically only on neighboring vowels or stop contrasts. In tasks that include sounds in context, listeners' success is more variable, so segment discrimination in isolation may not adequately represent the phonetic detail in stored representations. The current study explores the relationship between language experience and sensitivity to segmental cues in context by comparing the categorization patterns of monolingual English listeners and early and late Spanish–English bilinguals. Participants categorized nonce words containing different classes of English- and Spanish-specific sounds as being more English-like or more Spanish-like; target segments included phonemic cues (cues for which there is no analogous sound in the other language) or phonetic cues (cues for which English and Spanish share the category but differ in its phonetic implementation). Listeners' language categorization accuracy and reaction times were analyzed. Our results reveal a largely uniform categorization pattern across listener groups: Spanish cues were categorized more accurately than English cues, and phonemic cues were easier for listeners to categorize than phonetic cues. There were no differences in the sensitivity of monolinguals and early bilinguals to language-specific cues, suggesting that the early bilinguals' exposure to Spanish did not fundamentally change their representations of English phonology. However, neither did the early bilinguals show more sensitivity than the monolinguals to Spanish sounds. The late bilinguals, however, were significantly more accurate than either of the other groups. These findings indicate that listeners with varying exposure to English and Spanish are able to use language-specific cues in a nonce-word language categorization task. Differences in how, and not only when, a language was acquired may influence listener sensitivity to more difficult cues, and the advantage for phonemic cues may reflect the greater salience of categories unique to each language. Implications for foreign-accent categorization and cross-language speech perception are discussed, and future directions are outlined to better understand how salience varies across language-specific phonemic and phonetic cues. PMID:27445947

  4. Sounds for Study: Speech and Language Therapy Students' Use and Perception of Exercise Podcasts for Phonetics

    ERIC Educational Resources Information Center

    Knight, Rachael-Anne

    2010-01-01

    Currently little is known about how students use podcasts of exercise material (as opposed to lecture material), and whether they perceive such podcasts to be beneficial. This study aimed to assess how exercise podcasts for phonetics are used and perceived by second year speech and language therapy students. Eleven podcasts of graded phonetics…

  5. Speech Perception and Production by Sequential Bilingual Children: A Longitudinal Study of Voice Onset Time Acquisition

    ERIC Educational Resources Information Center

    McCarthy, Kathleen M.; Mahon, Merle; Rosen, Stuart; Evans, Bronwen G.

    2014-01-01

    The majority of bilingual speech research has focused on simultaneous bilinguals. Yet, in immigrant communities, children are often initially exposed to their family language (L1), before becoming gradually immersed in the host country's language (L2). This is typically referred to as sequential bilingualism. Using a longitudinal design, this…

  6. Team-Based Learning in a Capstone Course in Speech-Language Pathology: Learning Outcomes and Student Perceptions

    ERIC Educational Resources Information Center

    Wallace, Sarah E.

    2015-01-01

    Team-based learning (TBL), although found to increase student engagement and higher-level thinking, has not been examined in the field of speech-language pathology. The purpose of this study was to examine the effect of integrating TBL into a capstone course in evidence-based practice (EBP). The researcher evaluated 27 students' understanding of…

  7. Listening with an Accent: Speech Perception in a Second Language by Late Bilinguals

    ERIC Educational Resources Information Center

    Leikin, Mark; Ibrahim, Raphiq; Eviatar, Zohar; Sapir, Shimon

    2009-01-01

    The goal of the present study was to examine functioning of late bilinguals in their second language. Specifically, we asked how native and non-native Hebrew speaking listeners perceive accented and native-accented Hebrew speech. To achieve this goal we used the gating paradigm to explore the ability of healthy late fluent bilinguals (Russian and…

  8. Age Effects in First Language Attrition: Speech Perception by Korean-English Bilinguals

    ERIC Educational Resources Information Center

    Ahn, Sunyoung; Chang, Charles B.; DeKeyser, Robert; Lee-Ellis, Sunyoung

    2017-01-01

    This study investigated how bilinguals' perception of their first language (L1) differs according to age of reduced contact with L1 after immersion in a second language (L2). Twenty-one L1 Korean-L2 English bilinguals in the United States, ranging in age of reduced contact from 3 to 15 years, and 17 control participants in Korea were tested…

  9. Perception of initial obstruent voicing is influenced by gestural organization

    PubMed Central

    Best, Catherine T.; Hallé, Pierre A.

    2009-01-01

    Cross-language differences in phonetic settings for phonological contrasts of stop voicing have posed a challenge for attempts to relate specific phonological features to specific phonetic details. We probe the phonetic-phonological relationship for voicing contrasts more broadly, analyzing in particular their relevance to nonnative speech perception, from two theoretical perspectives: feature geometry and articulatory phonology. Because these perspectives differ in assumptions about temporal/phasing relationships among features/gestures within syllable onsets, we undertook a cross-language investigation on perception of obstruent (stop, fricative) voicing contrasts in three nonnative onsets that use a common set of features/gestures but with differing time-coupling. Listeners of English and French, which differ in their phonetic settings for word-initial stop voicing distinctions, were tested on perception of three onset types, all nonnative to both English and French, that differ in how initial obstruent voicing is coordinated with a lateral feature/gesture and additional obstruent features/gestures. The targets, listed from least complex to most complex onsets, were: a lateral fricative voicing distinction (Zulu /ɬ/-/ɮ/), a laterally-released affricate voicing distinction (Tlingit /tɬ/-/dɮ/), and a coronal stop voicing distinction in stop+/l/ clusters (Hebrew /tl/-/dl/). English and French listeners' performance reflected the differences in their native languages' stop voicing distinctions, compatible with prior perceptual studies on singleton consonant onsets. However, both groups' abilities to perceive voicing as a separable parameter also varied systematically with the structure of the target onsets, supporting the notion that the gestural organization of syllable onsets systematically affects perception of initial voicing distinctions. PMID:20228878

  10. Motor excitability during visual perception of known and unknown spoken languages.

    PubMed

    Swaminathan, Swathi; MacSweeney, Mairéad; Boyles, Rowan; Waters, Dafydd; Watkins, Kate E; Möttönen, Riikka

    2013-07-01

    It is possible to comprehend speech and discriminate languages by viewing a speaker's articulatory movements. Transcranial magnetic stimulation studies have shown that viewing speech enhances excitability in the articulatory motor cortex. Here, we investigated the specificity of this enhanced motor excitability in native and non-native speakers of English. Both groups were able to discriminate between speech movements related to a known (i.e., English) and unknown (i.e., Hebrew) language. The motor excitability was higher during observation of a known language than an unknown language or non-speech mouth movements, suggesting that motor resonance is enhanced specifically during observation of mouth movements that convey linguistic information. Surprisingly, however, the excitability was equally high during observation of a static face. Moreover, the motor excitability did not differ between native and non-native speakers. These findings suggest that the articulatory motor cortex processes several kinds of visual cues during speech communication. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  11. Ira Hirsh and oral deaf education: The role of audition in language development

    NASA Astrophysics Data System (ADS)

    Geers, Ann

    2002-05-01

    Prior to the 1960s, the teaching of speech to deaf children consisted primarily of instruction in lip reading and tactile perception accompanied by imitative exercises in speech sound production. Hirsh came to Central Institute for the Deaf with an interest in discovering the auditory capabilities of normal-hearing listeners. This interest led him to speculate that more normal speech development could be encouraged in deaf children by maximizing use of their limited residual hearing. Following the tradition of Max Goldstein, Edith Whetnall, and Dennis Fry, Hirsh gave scientific validity to the use of amplified speech as the primary avenue to oral language development in prelingually deaf children. This "auditory approach," combined with an emphasis on early intervention, formed the basis for auditory-oral education as we know it today. This presentation will examine how the speech perception, language, and reading skills of prelingually deaf children have changed as a result of improvements in auditory technology that have occurred over the past 30 years. Current data from children using cochlear implants will be compared with data collected earlier from children with profound hearing loss who used hearing aids. [Work supported by NIH.]

  12. Rhythm Perception and Its Role in Perception and Learning of Dysrhythmic Speech.

    PubMed

    Borrie, Stephanie A; Lansford, Kaitlin L; Barrett, Tyson S

    2017-03-01

    The perception of rhythm cues plays an important role in recognizing spoken language, especially in adverse listening conditions. Indeed, this has been shown to hold true even when the rhythm cues themselves are dysrhythmic. This study investigates whether expertise in rhythm perception provides a processing advantage for perception (initial intelligibility) and learning (intelligibility improvement) of naturally dysrhythmic speech, dysarthria. Fifty young adults with typical hearing participated in 3 key tests, including a rhythm perception test, a receptive vocabulary test, and a speech perception and learning test, with standard pretest, familiarization, and posttest phases. Initial intelligibility scores were calculated as the proportion of correct pretest words, while intelligibility improvement scores were calculated by subtracting this proportion from the proportion of correct posttest words. Rhythm perception scores predicted intelligibility improvement scores but not initial intelligibility. On the other hand, receptive vocabulary scores predicted initial intelligibility scores but not intelligibility improvement. Expertise in rhythm perception appears to provide an advantage for processing dysrhythmic speech, but a familiarization experience is required for the advantage to be realized. Findings are discussed in relation to the role of rhythm in speech processing and shed light on processing models that consider the consequence of rhythm abnormalities in dysarthria.
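
    The two scores defined above are simple proportions, as the short sketch below makes explicit; the word counts are invented for illustration and are not taken from the study.

    ```python
    # Hedged sketch of the intelligibility scores defined in the abstract:
    # initial intelligibility = proportion of pretest words correct;
    # improvement = posttest proportion minus pretest proportion.
    def proportion_correct(n_correct, n_total):
        return n_correct / n_total

    initial = proportion_correct(n_correct=42, n_total=100)    # pretest
    posttest = proportion_correct(n_correct=61, n_total=100)   # posttest
    print(f"initial={initial:.2f}, improvement={posttest - initial:.2f}")
    ```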

  13. Awareness of Rhythm Patterns in Speech and Music in Children with Specific Language Impairments

    PubMed Central

    Cumming, Ruth; Wilson, Angela; Leong, Victoria; Colling, Lincoln J.; Goswami, Usha

    2015-01-01

    Children with specific language impairments (SLIs) show impaired perception and production of language, and also show impairments in perceiving auditory cues to rhythm [amplitude rise time (ART) and sound duration] and in tapping to a rhythmic beat. Here we explore potential links between language development and rhythm perception in 45 children with SLI and 50 age-matched controls. We administered three rhythmic tasks, a musical beat detection task, a tapping-to-music task, and a novel music/speech task, which varied rhythm and pitch cues independently or together in both speech and music. Via low-pass filtering, the music sounded as though it was played from a low-quality radio and the speech sounded as though it was muffled (heard “behind the door”). We report data for all of the SLI children (N = 45, IQ varying), as well as for two independent subgroupings with intact IQ. One subgroup, “Pure SLI,” had intact phonology and reading (N = 16), the other, “SLI PPR” (N = 15), had impaired phonology and reading. When IQ varied (all SLI children), we found significant group differences in all the rhythmic tasks. For the Pure SLI group, there were rhythmic impairments in the tapping task only. For children with SLI and poor phonology (SLI PPR), group differences were found in all of the filtered speech/music AXB tasks. We conclude that difficulties with rhythmic cues in both speech and music are present in children with SLIs, but that some rhythmic measures are more sensitive than others. The data are interpreted within a “prosodic phrasing” hypothesis, and we discuss the potential utility of rhythmic and musical interventions in remediating speech and language difficulties in children. PMID:26733848

  16. Impact of language on development of auditory-visual speech perception.

    PubMed

    Sekiyama, Kaoru; Burnham, Denis

    2008-03-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.

  17. Unequal effects of speech and nonspeech contexts on the perceptual normalization of Cantonese level tones.

    PubMed

    Zhang, Caicai; Peng, Gang; Wang, William S-Y

    2012-08-01

    Context is important for recovering language information from talker-induced variability in acoustic signals. In tone perception, previous studies reported similar effects of speech and nonspeech contexts in Mandarin, supporting a general perceptual mechanism underlying tone normalization. However, no supportive evidence was obtained in Cantonese, also a tone language. Moreover, no study has compared speech and nonspeech contexts in the multi-talker condition, which is essential for exploring how inter-talker variability in speaking F0 is normalized. A further question is whether a talker's full F0 range and mean F0 facilitate normalization equally. To answer these questions, this study examines the effects of four context conditions (speech/nonspeech × F0 contour/mean F0) in the multi-talker condition in Cantonese. Results show that raising and lowering the F0 of speech contexts change the perception of identical stimuli from mid level tone to low and high level tone, respectively, whereas nonspeech contexts only mildly increase the identification preference. These results support a speech-specific mechanism of tone normalization. Moreover, a speech context with a flattened F0 trajectory, which neutralizes cues to a talker's full F0 range, fails to facilitate normalization in some conditions, implying that a talker's mean F0 is less efficient for minimizing talker-induced lexical ambiguity in tone perception.
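
    Raising or lowering the F0 of a context utterance, as in the experiment above, is typically done with PSOLA resynthesis in Praat. Below is a minimal sketch using the parselmouth library; the file name and the 1.2 scaling factor are assumptions for illustration.

        import parselmouth
        from parselmouth.praat import call

        snd = parselmouth.Sound("context.wav")
        # Decompose into a Manipulation object (time step, pitch floor/ceiling in Hz)
        manipulation = call(snd, "To Manipulation", 0.01, 75, 600)
        pitch_tier = call(manipulation, "Extract pitch tier")
        # Scale every F0 point by a constant factor (1.2 raises, 1/1.2 lowers)
        call(pitch_tier, "Multiply frequencies", snd.xmin, snd.xmax, 1.2)
        call([pitch_tier, manipulation], "Replace pitch tier")
        raised = call(manipulation, "Get resynthesis (overlap-add)")
        raised.save("context_raised.wav", "WAV")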

  18. The Influence of Visual and Auditory Information on the Perception of Speech and Non-Speech Oral Movements in Patients with Left Hemisphere Lesions

    ERIC Educational Resources Information Center

    Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram

    2009-01-01

    Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands…

  19. The speech perception skills of children with and without speech sound disorder.

    PubMed

    Hearnshaw, Stephanie; Baker, Elise; Munro, Natalie

    To investigate whether Australian-English speaking children with and without speech sound disorder (SSD) differ in their overall speech perception accuracy. Additionally, to investigate differences in the perception of specific phonemes and the association between speech perception and speech production skills. Twenty-five Australian-English speaking children aged 48-60 months participated in this study. The SSD group included 12 children and the typically developing (TD) group included 13 children. Children completed routine speech and language assessments in addition to an experimental Australian-English lexical and phonetic judgement task based on Rvachew's Speech Assessment and Interactive Learning System (SAILS) program (Rvachew, 2009). This task included eight words across four word-initial phonemes: /k, ɹ, ʃ, s/. Children with SSD showed significantly poorer perceptual accuracy on the lexical and phonetic judgement task compared with TD peers. The phonemes /ɹ/ and /s/ were most frequently perceived in error across both groups. Additionally, the phoneme /ɹ/ was most commonly produced in error. There was also a positive correlation between overall speech perception and speech production scores. Children with SSD perceived speech less accurately than their typically developing peers. The findings suggest that an Australian-English variation of a lexical and phonetic judgement task similar to the SAILS program is promising and worthy of a larger scale study. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Atypical audio-visual speech perception and McGurk effects in children with specific language impairment.

    PubMed

    Leybaert, Jacqueline; Macchi, Lucie; Huyse, Aurélie; Champoux, François; Bayard, Clémence; Colin, Cécile; Berthommier, Frédéric

    2014-01-01

    Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory alone, visual alone (speechreading), audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also had fewer correct responses in speechreading than children with TLD, indicating an impairment in phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting that they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of the percentage of information transmitted revealed a deficit in the children with SLI, particularly for the place of articulation feature. Taken together, the data support the hypothesis of an intact peripheral processing of auditory speech information, coupled with a supramodal deficit of phonemic categorization in children with SLI. Clinical implications are discussed.
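
    The "percentage of information transmitted" analysis mentioned above derives from Miller and Nicely's feature-based confusion analysis: the mutual information between stimulus and response features, relative to the stimulus entropy. A minimal sketch, with a made-up place-of-articulation confusion matrix:

        import numpy as np

        def relative_info_transmitted(confusion):
            # Percent information transmitted from a stimulus x response
            # confusion-count matrix: 100 * I(X;Y) / H(X).
            p = confusion / confusion.sum()
            px = p.sum(axis=1)                      # stimulus marginals
            py = p.sum(axis=0)                      # response marginals
            nz = p > 0
            mutual = (p[nz] * np.log2(p[nz] / np.outer(px, py)[nz])).sum()
            hx = -(px[px > 0] * np.log2(px[px > 0])).sum()
            return 100 * mutual / hx

        # Hypothetical confusions (rows: stimulus categories, cols: responses)
        conf = np.array([[18, 1, 1],
                         [2, 16, 2],
                         [1, 3, 16]])
        print(f"{relative_info_transmitted(conf):.1f}% transmitted")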

  2. Is the Sensorimotor Cortex Relevant for Speech Perception and Understanding? An Integrative Review

    PubMed Central

    Schomers, Malte R.; Pulvermüller, Friedemann

    2016-01-01

    In the neuroscience of language, phonemes are frequently described as multimodal units whose neuronal representations are distributed across perisylvian cortical regions, including auditory and sensorimotor areas. A different position views phonemes primarily as acoustic entities with posterior temporal localization, which are functionally independent from frontoparietal articulatory programs. To address this current controversy, we here discuss experimental results from functional magnetic resonance imaging (fMRI) as well as transcranial magnetic stimulation (TMS) studies. At first glance, a mixed picture emerges, with earlier research documenting neurofunctional distinctions between phonemes in both temporal and frontoparietal sensorimotor systems, but some recent work seemingly failing to replicate the latter. Detailed analysis of methodological differences between studies reveals that the way experiments are set up explains whether sensorimotor cortex maps phonological information during speech perception or not. In particular, acoustic noise during the experiment and ‘motor noise’ caused by button press tasks work against the frontoparietal manifestation of phonemes. We highlight recent studies using sparse imaging and passive speech perception tasks along with multivariate pattern analysis (MVPA) and especially representational similarity analysis (RSA), which succeeded in separating acoustic-phonological from general-acoustic processes and in mapping specific phonological information on temporal and frontoparietal regions. The question of whether sensorimotor cortex plays a causal role in speech perception and understanding is addressed by reviewing recent TMS studies. We conclude that frontoparietal cortices, including ventral motor and somatosensory areas, reflect phonological information during speech perception and exert a causal influence on language understanding. PMID:27708566

  3. Brainstem Transcription of Speech Is Disrupted in Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Russo, Nicole; Nicol, Trent; Trommer, Barbara; Zecker, Steve; Kraus, Nina

    2009-01-01

    Language impairment is a hallmark of autism spectrum disorders (ASD). The origin of the deficit is poorly understood although deficiencies in auditory processing have been detected in both perception and cortical encoding of speech sounds. Little is known about the processing and transcription of speech sounds at earlier (brainstem) levels or…

  4. The Neural Basis of Speech Perception through Lipreading and Manual Cues: Evidence from Deaf Native Users of Cued Speech

    PubMed Central

    Aparicio, Mario; Peigneux, Philippe; Charlier, Brigitte; Balériaux, Danielle; Kavec, Martin; Leybaert, Jacqueline

    2017-01-01

    We present here the first neuroimaging data for perception of Cued Speech (CS) by deaf adults who are native users of CS. CS is a visual mode of communicating a spoken language through a set of manual cues which accompany lipreading and disambiguate it. With CS, sublexical units of the oral language are conveyed clearly and completely through the visual modality without requiring hearing. The comparison of neural processing of CS in deaf individuals with processing of audiovisual (AV) speech in normally hearing individuals represents a unique opportunity to explore the similarities and differences in neural processing of an oral language delivered in a visuo-manual vs. an AV modality. The study included deaf adult participants who were early CS users and native hearing users of French who process speech audiovisually. Words were presented in an event-related fMRI design. Three conditions were presented to each group of participants. The deaf participants saw CS words (manual + lipread), words presented as manual cues alone, and words presented to be lipread without manual cues. The hearing group saw AV spoken words, audio-alone and lipread-alone. Three findings are highlighted. First, the middle and superior temporal gyrus (excluding Heschl’s gyrus) and left inferior frontal gyrus pars triangularis constituted a common, amodal neural basis for AV and CS perception. Second, integration was inferred in posterior parts of superior temporal sulcus for audio and lipread information in AV speech, but in the occipito-temporal junction, including MT/V5, for the manual cues and lipreading in CS. Third, the perception of manual cues showed a much greater overlap with the regions activated by CS (manual + lipreading) than lipreading alone did. This supports the notion that manual cues play a larger role than lipreading for CS processing. The present study contributes to a better understanding of the role of manual cues as support of visual speech perception in the framework of the multimodal nature of human communication. PMID:28424636

  5. Processing of Audiovisually Congruent and Incongruent Speech in School-Age Children with a History of Specific Language Impairment: A Behavioral and Event-Related Potentials Study

    ERIC Educational Resources Information Center

    Kaganovich, Natalya; Schumaker, Jennifer; Macias, Danielle; Gustafson, Dana

    2015-01-01

    Previous studies indicate that at least some aspects of audiovisual speech perception are impaired in children with specific language impairment (SLI). However, whether audiovisual processing difficulties are also present in older children with a history of this disorder is unknown. By combining electrophysiological and behavioral measures, we…

  6. Voice input/output capabilities at Perception Technology Corporation

    NASA Technical Reports Server (NTRS)

    Ferber, Leon A.

    1977-01-01

    Condensed resumes of key company personnel at Perception Technology Corporation are presented. The staff possesses expertise in speech recognition, speech synthesis, speaker authentication, and language identification. Capabilities of hardware and software engineers are also included.

  7. Top–Down Modulation on the Perception and Categorization of Identical Pitch Contours in Speech and Music

    PubMed Central

    Weidema, Joey L.; Roncaglia-Denissen, M. P.; Honing, Henkjan

    2016-01-01

    Whether pitch in language and music is governed by domain-specific or domain-general cognitive mechanisms is contentiously debated. The aim of the present study was to investigate whether mechanisms governing pitch contour perception operate differently when pitch information is interpreted as either speech or music. By modulating listening mode, this study aspired to demonstrate that pitch contour perception relies on domain-specific cognitive mechanisms, which are regulated by top–down influences from language and music. Three groups of participants (Mandarin speakers, Dutch speaking non-musicians, and Dutch musicians) were exposed to identical pitch contours, and tested on their ability to identify these contours in a language and musical context. Stimuli consisted of disyllabic words spoken in Mandarin, and melodic tonal analogs, embedded in a linguistic and melodic carrier phrase, respectively. Participants classified identical pitch contours as significantly different depending on listening mode. Top–down influences from language appeared to alter the perception of pitch contour in speakers of Mandarin. This was not the case for non-musician speakers of Dutch. Moreover, this effect was lacking in Dutch speaking musicians. The classification patterns of pitch contours in language and music seem to suggest that domain-specific categorization is modulated by top–down influences from language and music. PMID:27313552

  8. The Ear Is Connected to the Brain: Some New Directions in the Study of Children with Cochlear Implants at Indiana University

    PubMed Central

    Houston, Derek M.; Beer, Jessica; Bergeson, Tonya R.; Chin, Steven B.; Pisoni, David B.; Miyamoto, Richard T.

    2012-01-01

    Since the early 1980s, the DeVault Otologic Research Laboratory at the Indiana University School of Medicine has been at the forefront of research on speech and language outcomes in children with cochlear implants. This paper highlights work over the last decade that has moved beyond collecting speech and language outcome measures to focus more on investigating the underlying cognitive, social, and linguistic skills that predict speech and language outcomes. This recent work reflects our growing appreciation that early auditory deprivation can affect more than hearing and speech perception. The new directions include research on attention to speech, word learning, phonological development, social development, and neurocognitive processes. We have also expanded our subject populations to include infants and children with additional disabilities. PMID:22668765

  9. Speech perception in autism spectrum disorder: An activation likelihood estimation meta-analysis.

    PubMed

    Tryfon, Ana; Foster, Nicholas E V; Sharda, Megha; Hyde, Krista L

    2018-02-15

    Autism spectrum disorder (ASD) is often characterized by atypical language profiles and atypical auditory and speech processing, which can contribute to aberrant language and social communication skills in ASD. The study of the neural basis of speech perception in ASD could provide a potential neurobiological marker of ASD early on, but mixed results across studies render it difficult to find a reliable neural characterization of speech processing in ASD. To this end, the present study examined the functional neural basis of speech perception in ASD versus typical development (TD) using an activation likelihood estimation (ALE) meta-analysis of 18 qualifying studies. The present study included separate analyses for TD and ASD, which allowed us to examine patterns of within-group brain activation as well as both common and distinct patterns of brain activation across the ASD and TD groups. Overall, ASD and TD showed mostly common brain activation of speech processing in bilateral superior temporal gyrus (STG) and left inferior frontal gyrus (IFG). However, the results revealed trends for some distinct activation in the TD group showing additional activation in higher-order brain areas including left superior frontal gyrus (SFG), left medial frontal gyrus (MFG), and right IFG. These results provide a more reliable neural characterization of speech processing in ASD relative to previous single neuroimaging studies and motivate future work to investigate how these brain signatures relate to behavioral measures of speech processing in ASD. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Evidence-based guidelines for recommending cochlear implantation for young children: Audiological criteria and optimizing age at implantation.

    PubMed

    Leigh, Jaime R; Dettman, Shani J; Dowell, Richard C

    2016-01-01

    Establish up-to-date evidence-based guidelines for recommending cochlear implantation for young children. Speech perception results for early-implanted children were compared with those of children using traditional amplification. Equivalent pure-tone average (PTA) hearing loss for cochlear implant (CI) users was established. Language of early-implanted children was assessed over six years and compared to hearing peers. Seventy-eight children using CIs and 62 children using traditional amplification with hearing losses ranging from 25 to 120 dB HL PTA (speech perception study). Thirty-two children who received a CI before 2.5 years of age (language study). Speech perception outcomes suggested that children with a PTA greater than 60 dB HL have a 75% chance of benefit over traditional amplification. More conservative criteria applied to the data suggested that children with PTA greater than 82 dB HL have a 95% chance of benefit. Children implanted under 2.5 years with no significant cognitive deficits made normal language progress but retained a delay approximately equal to their age at implantation. Hearing-impaired children under three years of age may benefit from cochlear implantation if their PTA exceeds 60 dB HL bilaterally. Implantation as young as possible should minimize any language delay resulting from an initial period of auditory deprivation.
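
    The guideline above reduces to a simple decision rule over the pure-tone average. A minimal sketch, assuming a four-frequency PTA (0.5, 1, 2, and 4 kHz; the paper defines its own PTA convention):

        def pta(thresholds_db):
            # Pure-tone average across the tested frequencies, in dB HL
            return sum(thresholds_db) / len(thresholds_db)

        def implant_recommendation(thresholds_db):
            avg = pta(thresholds_db)
            if avg > 82:
                return avg, "95% chance of CI benefit over amplification"
            if avg > 60:
                return avg, "75% chance of CI benefit over amplification"
            return avg, "traditional amplification likely comparable"

        # Hypothetical thresholds at 0.5, 1, 2, 4 kHz
        print(implant_recommendation([65, 70, 80, 85]))   # (75.0, '75% chance ...')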

  11. Communication Disorders in Speakers of Tone Languages: Etiological Bases and Clinical Considerations

    PubMed Central

    Wong, Patrick C. M.; Perrachione, Tyler K.; Gunasekera, Geshri; Chandrasekaran, Bharath

    2009-01-01

    Lexical tones are a phonetic contrast necessary for conveying meaning in a majority of the world’s languages. Various hearing, speech, and language disorders affect the ability to perceive or produce lexical tones, thereby seriously impairing individuals’ communicative abilities. The number of tone language speakers is increasing, even in otherwise English-speaking nations, yet insufficient emphasis has been placed on clinical assessment and rehabilitation of lexical tone disorders. The similarities and dissimilarities between lexical tones and other speech sounds make a richer scientific understanding of their physiological bases paramount to more effective remediation of speech and language disorders in general. Here we discuss the cognitive and biological bases of lexical tones, emphasizing the neural structures and networks that support their acquisition, perception, and cognitive representation. We present emerging research on lexical tone learning in the context of the clinical disorders of hearing, speech, and language that this body of research will help to address. PMID:19711234

  12. Speech, language, and cognitive dysfunction in children with focal epileptiform activity: A follow-up study.

    PubMed

    Rejnö-Habte Selassie, Gunilla; Hedström, Anders; Viggedal, Gerd; Jennische, Margareta; Kyllerman, Mårten

    2010-07-01

    We reviewed the medical history, EEG recordings, and developmental milestones of 19 children with speech and language dysfunction and focal epileptiform activity. Speech, language, and neuropsychological assessments and EEG recordings were performed at follow-up, and prognostic indicators were analyzed. Three patterns of language development were observed: late start and slow development, late start and deterioration/regression, and normal start and later regression/deterioration. No differences in test results among these groups were seen, indicating a spectrum of related conditions including Landau-Kleffner syndrome and epileptic language disorder. More than half of the participants had speech and language dysfunction at follow-up. IQ levels, working memory, and processing speed were also affected. Dysfunction of auditory perception in noise was found in more than half of the participants, and dysfunction of auditory attention in all. Dysfunction of communication, oral motor ability, and stuttering were noted in a few. Family history of seizures and abundant epileptiform activity indicated a worse prognosis. Copyright 2010 Elsevier Inc. All rights reserved.

  13. Cross-Cultural Learning: The Language Connection.

    ERIC Educational Resources Information Center

    Axelrod, Joseph

    1981-01-01

    If foreign language acquisition is disconnected from the cultural life of the foreign speech community, the learning yield is low. Integration of affective learning, cultural learning, and foreign language learning are essential to a successful cross-cultural experience. (MSE)

  14. Brain Mechanisms Underlying Speech and Language; Conference Proceedings (Princeton, New Jersey, November 9-12, 1965).

    ERIC Educational Resources Information Center

    Darley, Frederic L., Ed.

    The conference proceedings of scientists specializing in language processes and neurophysiological mechanisms are reported to stimulate a cross-over of interest and research in the central brain phenomena (reception, understanding, retention, integration, formulation, and expression) as they relate to speech and language. Eighteen research reports…

  15. Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant.

    PubMed

    Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa

    2016-07-01

    The objective of the evaluation of auditory perception of cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. This study aimed to investigate the differences in the process of auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy used. This is a prospective, descriptive, cross-sectional cohort study. We selected ten cochlear implant users, who were characterized by hearing threshold, speech perception tests, and the Hearing Handicap Inventory for Adults. There was no significant difference in the variables subject age, age at acquisition of hearing loss, etiology, time of hearing deprivation, time of cochlear implant use, and mean hearing threshold with the cochlear implant when the speech coding strategy was changed. There was no relationship between lack of handicap perception and improvement in speech perception with either speech coding strategy. There was no significant difference between the strategies evaluated, and no relation was observed between them and the variables studied.

  16. Clinical and Educational Perspectives on Language Intervention for Children with Autism.

    ERIC Educational Resources Information Center

    Kamhi, Alan G.; And Others

    The paper examines aspects of effective language intervention with autistic children. An overview is presented about the nature of language, its perception and comprehension, and the production of speech-language. Assessment strategies are considered. The second part of the paper analyzes traditional and communications-based intervention programs.…

  17. Speech perception and reading: two parallel modes of understanding language and implications for acquiring literacy naturally.

    PubMed

    Massaro, Dominic W

    2012-01-01

    I review 2 seminal research reports published in this journal during its second decade more than a century ago. Given psychology's subdisciplines, they would not normally be reviewed together because one involves reading and the other speech perception. The small amount of interaction between these domains might have limited research and theoretical progress. In fact, the 2 early research reports revealed common processes involved in these 2 forms of language processing. Their illustration of the role of Wundt's apperceptive process in reading and speech perception anticipated descriptions of contemporary theories of pattern recognition, such as the fuzzy logical model of perception. Based on the commonalities between reading and listening, one can question why they have been viewed so differently. It is commonly believed that learning to read requires formal instruction and schooling, whereas spoken language is acquired from birth onward through natural interactions with people who talk. Most researchers and educators believe that spoken language is acquired naturally from birth onward and even prenatally. Learning to read, on the other hand, is not possible until the child has acquired spoken language, reaches school age, and receives formal instruction. If an appropriate form of written text is made available early in a child's life, however, the current hypothesis is that reading will also be learned inductively and emerge naturally, with no significant negative consequences. If this proposal is true, it should soon be possible to create an interactive system, Technology Assisted Reading Acquisition, to allow children to acquire literacy naturally.
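
    The fuzzy logical model of perception mentioned above combines independent evidence sources multiplicatively and normalizes across alternatives (the relative goodness rule). A toy two-alternative sketch; the truth values are illustrative:

        def flmp(auditory, visual):
            # Multiply per-source truth values for each alternative,
            # then normalize to get a response probability.
            support_a = auditory * visual
            support_b = (1 - auditory) * (1 - visual)
            return support_a / (support_a + support_b)

        # Ambiguous audio (0.5) with strong visual evidence (0.9)
        print(flmp(0.5, 0.9))   # 0.9 -> the clearer source dominates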

  18. How may the basal ganglia contribute to auditory categorization and speech perception?

    PubMed Central

    Lim, Sung-Joo; Fiez, Julie A.; Holt, Lori L.

    2014-01-01

    Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox of mechanisms available to speech perception. This perspective invites consideration of findings from cognitive neuroscience literatures outside of the speech domain as a means of constraining models of speech perception. Although neurobiological models of speech perception have mainly focused on cerebral cortex, research outside the speech domain is consistent with the possibility of significant subcortical contributions in category learning. Here, we review the functional role of one such structure, the basal ganglia. We examine research from animal electrophysiology, human neuroimaging, and behavior to consider characteristics of basal ganglia processing that may be advantageous for speech category learning. We also present emerging evidence for a direct role for basal ganglia in learning auditory categories in a complex, naturalistic task intended to model the incidental manner in which speech categories are acquired. To conclude, we highlight new research questions that arise in incorporating the broader neuroscience research literature in modeling speech perception, and suggest how understanding contributions of the basal ganglia can inform attempts to optimize training protocols for learning non-native speech categories in adulthood. PMID:25136291

  19. Recommendations for elaboration, transcultural adaptation and validation process of tests in Speech, Hearing and Language Pathology.

    PubMed

    Pernambuco, Leandro; Espelt, Albert; Magalhães, Hipólito Virgílio; Lima, Kenio Costa de

    2017-06-08

    To present a guide with recommendations for the translation, adaptation, elaboration, and validation of tests in Speech and Language Pathology. The recommendations were based on international guidelines with a focus on the elaboration, translation, cross-cultural adaptation, and validation of tests. The recommendations were grouped into two charts, one with procedures for translation and transcultural adaptation and the other for obtaining evidence of validity, reliability, and measures of accuracy of the tests. A guide with norms for the organization and systematization of the process of elaboration, translation, cross-cultural adaptation, and validation of tests in Speech and Language Pathology was created.

  1. Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers

    PubMed Central

    Hannah, Beverly; Wang, Yue; Jongman, Allard; Sereno, Joan A.; Cao, Jiguo; Nie, Yunlong

    2017-01-01

    Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-FacialGestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers’ performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning. PMID:29255435

  2. Leveraging Automatic Speech Recognition Errors to Detect Challenging Speech Segments in TED Talks

    ERIC Educational Resources Information Center

    Mirzaei, Maryam Sadat; Meshgi, Kourosh; Kawahara, Tatsuya

    2016-01-01

    This study investigates the use of Automatic Speech Recognition (ASR) systems to epitomize second language (L2) listeners' problems in perception of TED talks. ASR-generated transcripts of videos often involve recognition errors, which may indicate difficult segments for L2 listeners. This paper aims to discover the root-causes of the ASR errors…

  3. Clinicians' perspectives of therapeutic alliance in face-to-face and telepractice speech-language pathology sessions.

    PubMed

    Freckmann, Anneka; Hines, Monique; Lincoln, Michelle

    2017-06-01

    To investigate the face validity of a measure of therapeutic alliance for paediatric speech-language pathology and to determine whether a difference exists in therapeutic alliance reported by speech-language pathologists (SLPs) conducting face-to-face sessions, compared with telepractice SLPs or in their ratings of confidence with technology. SLPs conducting telepractice (n = 14) or face-to-face therapy (n = 18) completed an online survey which included the Therapeutic Alliance Scales for Children - Revised (TASC-r) (Therapist Form) to rate clinicians' perceptions of rapport with up to three clients. Participants also reported their overall perception of rapport with each client and their comfort with technology. There was a strong correlation between TASC-r total scores and overall ratings of rapport, providing preliminary evidence of TASC-r face validity. There was no significant difference between TASC-r scores for telepractice and face-to-face therapy (p = 0.961), nor face-to-face and telepractice SLPs' confidence with familiar (p = 0.414) or unfamiliar technology (p = 0.780). The TASC-r may be a promising tool for measuring therapeutic alliance in speech-language pathology. Telepractice does not appear to have a negative effect on rapport between SLPs and paediatric clients. Future research is required to identify how SLPs develop rapport in telepractice.

  4. Co-occurrence statistics as a language-dependent cue for speech segmentation.

    PubMed

    Saksida, Amanda; Langus, Alan; Nespor, Marina

    2017-05-01

    To what extent can language acquisition be explained in terms of different associative learning mechanisms? It has been hypothesized that distributional regularities in spoken languages are strong enough to elicit statistical learning about dependencies among speech units. Distributional regularities could be a useful cue for word learning even without rich language-specific knowledge. However, it is not clear how strong and reliable the distributional cues are that humans might use to segment speech. We investigate cross-linguistic viability of different statistical learning strategies by analyzing child-directed speech corpora from nine languages and by modeling possible statistics-based speech segmentations. We show that languages vary as to which statistical segmentation strategies are most successful. The variability of the results can be partially explained by systematic differences between languages, such as rhythmical differences. The results confirm previous findings that different statistical learning strategies are successful in different languages and suggest that infants may have to primarily rely on non-statistical cues when they begin their process of speech segmentation. © 2016 John Wiley & Sons Ltd.
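
    One statistic that segmentation models of this kind rely on is the forward transitional probability between adjacent syllables, with word boundaries posited at local dips. A minimal sketch with a made-up syllable stream; the threshold heuristic and syllables are illustrative:

        from collections import Counter

        def segment_by_tp(syllables, threshold=None):
            # Forward transitional probability TP(x -> y) = count(xy) / count(x)
            pairs = list(zip(syllables, syllables[1:]))
            pair_counts = Counter(pairs)
            first_counts = Counter(syllables[:-1])
            tp = {p: pair_counts[p] / first_counts[p[0]] for p in pair_counts}
            cut = threshold if threshold is not None else sum(tp.values()) / len(tp)
            words, word = [], [syllables[0]]
            for x, y in pairs:
                if tp[(x, y)] < cut:        # low TP -> posit a word boundary
                    words.append("".join(word))
                    word = []
                word.append(y)
            words.append("".join(word))
            return words

        stream = ["go", "la", "bi", "du", "pa", "do", "go", "la",
                  "pa", "do", "bi", "du", "go", "la"]
        print(segment_by_tp(stream))
        # ['gola', 'bidu', 'pado', 'gola', 'pado', 'bidu', 'gola']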

  5. Cross-Language Differences in Informational Masking of Speech by Speech: English versus Mandarin Chinese

    ERIC Educational Resources Information Center

    Wu, Xihong; Yang, Zhigang; Huang, Ying; Chen, Jing; Li, Liang; Daneman, Meredyth; Schneider, Bruce A.

    2011-01-01

    Purpose: The purpose of the study was to determine why perceived spatial separation provides a greater release from informational masking in Chinese than English when target sentences in each of the languages are masked by other talkers speaking the same language. Method: Monolingual speakers of English and Mandarin Chinese listened to…

  6. Cross-Language Activation Begins during Speech Planning and Extends into Second Language Speech

    ERIC Educational Resources Information Center

    Jacobs, April; Fricke, Melinda; Kroll, Judith F.

    2016-01-01

    Three groups of native English speakers named words aloud in Spanish, their second language (L2). Intermediate proficiency learners in a classroom setting (Experiment 1) and in a domestic immersion program (Experiment 2) were compared to a group of highly proficient English-Spanish speakers. All three groups named cognate words more quickly and…

  7. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    PubMed

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample comprised 60 Persian-speaking children aged 5-7 years. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experiments: auditory-only and audiovisual presentation conditions. The test was a closed-set task including 30 words that were orally presented by a speech-language pathologist. The scores for audiovisual word perception were significantly higher than for the auditory-only condition in the children with normal hearing (P<0.01) and cochlear implants (P<0.05); however, in the children with hearing aids, there was no significant difference between word perception scores in the auditory-only and audiovisual presentation conditions (P>0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe-to-profound hearing loss in order to determine whether a cochlear implant or hearing aid has been efficient for them; i.e., if a child with hearing impairment using a CI or HA obtains higher scores in audiovisual spoken word recognition than in the auditory-only condition, his or her auditory skills have developed appropriately because the CI or HA has been effective as one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Understanding Language: An Information-Processing Analysis of Speech Perception, Reading, and Psycholinguistics.

    ERIC Educational Resources Information Center

    Massaro, Dominic W., Ed.

    In an information-processing approach to language processing, language processing is viewed as a sequence of psychological stages that occur between the initial presentation of the language stimulus and the meaning in the mind of the language processor. This book defines each of the processes and structures involved, explains how each of them…

  9. Early Sign Language Experience Goes Along with an Increased Cross-modal Gain for Affective Prosodic Recognition in Congenitally Deaf CI Users.

    PubMed

    Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte

    2018-04-01

    It remains unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history of sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and rely more on the facial cues of audio-visual emotional stimuli. Two groups of young adult CD CI users, early signers (ES CI users; n = 11) and late signers (LS CI users; n = 10), and a group of hearing, non-signing, age-matched controls (n = 12) performed an emotion recognition task with auditory, visual, and cross-modal emotionally congruent and incongruent speech stimuli. On different trials, participants categorized either the facial or the vocal expressions. The ES CI users recognized affective prosody more accurately than the LS CI users in the presence of congruent facial information. Furthermore, the ES CI users, but not the LS CI users, gained more than the controls from congruent visual stimuli when recognizing affective prosody. Both CI groups performed worse overall than the controls in recognizing affective prosody. These results suggest that early sign language experience affects multisensory emotion perception in CD CI users.

  10. Parent and Teacher Perceptions of Participation and Outcomes in an Intensive Communication Intervention for Children with Pragmatic Language Impairment

    ERIC Educational Resources Information Center

    Baxendale, Janet; Lockton, Elaine; Adams, Catherine; Gaile, Jacqueline

    2013-01-01

    Background: Treatment trials that enquire into parents' and teachers' views on speech-language interventions and outcomes for primary school-age children are relatively rare. The current study sought perceptions of the process of intervention and value placed on outcomes resulting from a trial of intervention, the Social Communication Intervention…

  11. Voice recognition through phonetic features with Punjabi utterances

    NASA Astrophysics Data System (ADS)

    Kaur, Jasdeep; Juglan, K. C.; Sharma, Vishal; Upadhyay, R. K.

    2017-07-01

    This paper deals with perception and disorders of speech in Punjabi. In view of the importance of voice identification, various parameters of speaker identification were studied. Speech material was recorded with a tape recorder in both normal and disguised modes of utterance. From the recorded material, utterances free from noise were selected for auditory and acoustic spectrographic analysis. The comparison of normal and disguised speech of seven subjects is reported. The fundamental frequency (F0) at similar positions, plosive duration at certain phonemes, and the amplitude ratio (A1:A2) were compared in normal and disguised speech. The formant frequencies of normal and disguised speech were found to remain almost identical only when compared at positions of the same vowel quality and quantity; if a vowel is more closed or more open in the disguised utterance, its formant frequencies shift relative to the normal utterance. The amplitude ratio (A1:A2) was found to be speaker dependent and remained unchanged in disguised utterances, although this value may shift if cross-sectioning is not done at the same location.
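
    For acoustic comparisons of this kind, F0 can be extracted with Praat's pitch tracker via the parselmouth library. A minimal sketch comparing mean F0 across speaking modes; the file names are hypothetical:

        import parselmouth

        def mean_f0(path):
            # Mean F0 (Hz) over voiced frames; frames with 0 Hz are unvoiced
            pitch = parselmouth.Sound(path).to_pitch()
            f0 = pitch.selected_array["frequency"]
            return f0[f0 > 0].mean()

        normal, disguised = mean_f0("normal.wav"), mean_f0("disguised.wav")
        print(f"F0 shift under disguise: {disguised - normal:+.1f} Hz")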

  12. Speech Perception and Production by Sequential Bilingual Children: A Longitudinal Study of Voice Onset Time Acquisition

    PubMed Central

    McCarthy, Kathleen M; Mahon, Merle; Rosen, Stuart; Evans, Bronwen G

    2014-01-01

    The majority of bilingual speech research has focused on simultaneous bilinguals. Yet, in immigrant communities, children are often initially exposed to their family language (L1), before becoming gradually immersed in the host country's language (L2). This is typically referred to as sequential bilingualism. Using a longitudinal design, this study explored the perception and production of the English voicing contrast in 55 children (40 Sylheti-English sequential bilinguals and 15 English monolinguals). Children were tested twice: when they were in nursery (52-month-olds) and 1 year later. Sequential bilinguals' perception and production of English plosives were initially driven by their experience with their L1, but after starting school, changed to match that of their monolingual peers. PMID:25123987

  13. Neural correlates of audiovisual speech processing in a second language.

    PubMed

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. A Hierarchical Generative Framework of Language Processing: Linking Language Perception, Interpretation, and Production Abnormalities in Schizophrenia

    PubMed Central

    Brown, Meredith; Kuperberg, Gina R.

    2015-01-01

    Language and thought dysfunction are central to the schizophrenia syndrome. They are evident in the major symptoms of psychosis itself, particularly as disorganized language output (positive thought disorder) and auditory verbal hallucinations (AVHs), and they also manifest as abnormalities in both high-level semantic and contextual processing and low-level perception. However, the literatures characterizing these abnormalities have largely been separate and have sometimes provided mutually exclusive accounts of aberrant language in schizophrenia. In this review, we propose that recent generative probabilistic frameworks of language processing can provide crucial insights that link these four lines of research. We first outline neural and cognitive evidence that real-time language comprehension and production normally involve internal generative circuits that propagate probabilistic predictions to perceptual cortices — predictions that are incrementally updated based on prediction error signals as new inputs are encountered. We then explain how disruptions to these circuits may compromise communicative abilities in schizophrenia by reducing the efficiency and robustness of both high-level language processing and low-level speech perception. We also argue that such disruptions may contribute to the phenomenology of thought-disordered speech and false perceptual inferences in the language system (i.e., AVHs). This perspective suggests a number of productive avenues for future research that may elucidate not only the mechanisms of language abnormalities in schizophrenia, but also promising directions for cognitive rehabilitation. PMID:26640435
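
    The incremental belief updating described above can be caricatured with a single delta-rule step: revise the current estimate by a gain-weighted prediction error. A toy illustration only, not the authors' model; the gain and values are arbitrary:

        def update(prediction, observation, gain=0.3):
            # One predictive-coding step: the error drives the revision
            error = observation - prediction
            return prediction + gain * error, error

        estimate = 0.2                      # prior expectation for some feature
        for obs in (0.8, 0.7, 0.75):
            estimate, err = update(estimate, obs)
            print(f"error={err:+.2f} -> estimate={estimate:.2f}")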

  15. Rhythm Perception and Its Role in Perception and Learning of Dysrhythmic Speech

    ERIC Educational Resources Information Center

    Borrie, Stephanie A.; Lansford, Kaitlin L.; Barrett, Tyson S.

    2017-01-01

    Purpose: The perception of rhythm cues plays an important role in recognizing spoken language, especially in adverse listening conditions. Indeed, this has been shown to hold true even when the rhythm cues themselves are dysrhythmic. This study investigates whether expertise in rhythm perception provides a processing advantage for perception…

  16. Outcomes of cochlear implantation in deaf children of deaf parents: comparative study.

    PubMed

    Hassanzadeh, S

    2012-10-01

    This retrospective study compared the cochlear implantation outcomes of first- and second-generation deaf children. The study group consisted of seven deaf, cochlear-implanted children with deaf parents. An equal number of deaf children with normal-hearing parents were selected by matched sampling as a reference group. Participants were matched based on onset and severity of deafness, duration of deafness, age at cochlear implantation, duration of cochlear implantation, gender, and cochlear implant model. We used the Persian Auditory Perception Test for the Hearing Impaired, the Speech Intelligibility Rating scale, and the Sentence Imitation Test to measure participants' speech perception, speech production, and language development, respectively. Both groups of children showed auditory and speech development. However, the second-generation deaf children (i.e., deaf children of deaf parents) outperformed the deaf children with hearing parents in cochlear implantation performance. This study confirms that second-generation deaf children outperform deaf children of hearing parents in cochlear implantation performance. Encouraging deaf children to communicate in sign language from a very early age, before cochlear implantation, appears to improve their ability to learn spoken language after cochlear implantation.

  17. Children with a cochlear implant: characteristics and determinants of speech recognition, speech-recognition growth rate, and speech production.

    PubMed

    Wie, Ona Bø; Falkenberg, Eva-Signe; Tvete, Ole; Tomblin, Bruce

    2007-05-01

    The objectives of the study were to describe the characteristics of the first 79 prelingually deaf cochlear implant users in Norway and to investigate to what degree the variation in speech recognition, speech-recognition growth rate, and speech production could be explained by the characteristics of the child, the cochlear implant, the family, and the educational setting. Data gathered longitudinally were analysed using descriptive statistics, multiple regression, and growth-curve analysis. The results show that more than 50% of the variation could be explained by these characteristics. Daily user time, non-verbal intelligence, mode of communication, length of CI experience, and educational placement had the largest effects on the outcome. The results also indicate that children educated with a bilingual approach show better speech perception and a faster speech-perception growth rate with increased focus on spoken language.

  18. The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception.

    PubMed

    Skipper, Jeremy I; Devlin, Joseph T; Lametti, Daniel R

    2017-01-01

    Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions are ubiquitously active and form multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor and acoustic only models of speech perception and classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminant acoustic patterns as listening context requires. Copyright © 2016. Published by Elsevier Inc.

  19. Hearing loss and speech perception in noise difficulties in Fanconi anemia.

    PubMed

    Verheij, Emmy; Oomen, Karin P Q; Smetsers, Stephanie E; van Zanten, Gijsbert A; Speleman, Lucienne

    2017-10-01

    Fanconi anemia is a hereditary chromosomal instability disorder. Hearing loss and ear abnormalities are among the many manifestations reported in this disorder. In addition, Fanconi anemia patients often complain about hearing difficulties in situations with background noise (speech perception in noise difficulties). Our study aimed to describe the prevalence of hearing loss and speech perception in noise difficulties in Dutch Fanconi anemia patients. A retrospective chart review was conducted at a Dutch tertiary care center. All patients with Fanconi anemia at clinical follow-up in our hospital were included, and medical files were reviewed to collect data on hearing loss and speech perception in noise difficulties. In total, 49 Fanconi anemia patients were included. Audiograms were available for 29 patients and showed hearing loss in 16 of them (55%): conductive in 24.1%, sensorineural in 20.7%, and mixed in 10.3%. A speech in noise test was performed in 17 patients; speech perception in noise was subnormal in nine patients (52.9%) and abnormal in two (11.7%). Hearing loss and speech perception in noise abnormalities are therefore common in Fanconi anemia, and pure-tone audiograms and speech in noise tests should be performed, preferably already at a young age, because hearing aids or assistive listening devices could be very valuable in developing language and communication skills.

  20. Lexical Activation in Bilinguals' Speech Production Is Dynamic: How Language Ambiguous Words Can Affect Cross-Language Activation

    ERIC Educational Resources Information Center

    Hermans, Daan; Ormel, E.; van Besselaar, Ria; van Hell, Janet

    2011-01-01

    Is the bilingual language production system a dynamic system that can operate in different language activation states? Three experiments investigated to what extent cross-language phonological co-activation effects in language production are sensitive to the composition of the stimulus list. L1 Dutch-L2 English bilinguals decided whether or not a…

  1. A Cross-Language Study of Acoustic Predictors of Speech Intelligibility in Individuals With Parkinson's Disease

    PubMed Central

    Choi, Yaelin

    2017-01-01

    Purpose: The present study aimed to compare acoustic models of speech intelligibility in individuals with the same disease (Parkinson's disease [PD]) and presumably similar underlying neuropathologies but with different native languages (American English [AE] and Korean). Method: A total of 48 speakers from the 4 speaker groups (AE speakers with PD, Korean speakers with PD, healthy English speakers, and healthy Korean speakers) were asked to read a paragraph in their native languages. Four acoustic variables were analyzed: acoustic vowel space, voice onset time contrast scores, normalized pairwise variability index, and articulation rate. Speech intelligibility scores were obtained from scaled estimates of sentences extracted from the paragraph. Results: The findings indicated that the multiple regression models of speech intelligibility were different in Korean and AE, even with the same set of predictor variables and with speakers matched on speech intelligibility across languages. Analysis of the descriptive data for the acoustic variables showed the expected compression of the vowel space in speakers with PD in both languages, lower normalized pairwise variability index scores in Korean compared with AE, and no differences within or across language in articulation rate. Conclusions: The results indicate that the basis of an intelligibility deficit in dysarthria is likely to depend on the native language of the speaker and listener. Additional research is required to explore other potential predictor variables, as well as additional language comparisons to pursue cross-linguistic considerations in classification and diagnosis of dysarthria types. PMID:28821018
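
    Of the four acoustic variables above, the normalized pairwise variability index (nPVI) is the most self-contained to compute: it is 100 times the mean normalized difference between successive interval durations (here, vowel durations). A minimal sketch with made-up durations is given below; it illustrates the standard nPVI formula, not the authors' code.

```python
# Normalized pairwise variability index (nPVI) over successive interval
# durations: 100 * mean(|d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)).
def npvi(durations):
    if len(durations) < 2:
        raise ValueError("nPVI needs at least two intervals")
    diffs = [
        abs(d1 - d2) / ((d1 + d2) / 2)
        for d1, d2 in zip(durations, durations[1:])
    ]
    return 100 * sum(diffs) / len(diffs)

# Example with hypothetical vowel durations in milliseconds:
print(npvi([120, 80, 150, 90]))  # ~50.3; higher = more durational contrast
```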

  2. Cross-Linguistic Influence in Third Language Perception: L2 and L3 Perception of Japanese Contrasts

    ERIC Educational Resources Information Center

    Onishi, Hiromi

    2013-01-01

    This dissertation examines the possible influence of language learners' second language (L2) on their perception of phonological contrasts in their third language (L3). Previous studies on Third Language Acquisition (TLA) suggest various factors as possible sources of cross-linguistic influence in the acquisition of an L3. This dissertation…

  3. Identification of Pure-Tone Audiologic Thresholds for Pediatric Cochlear Implant Candidacy: A Systematic Review.

    PubMed

    de Kleijn, Jasper L; van Kalmthout, Ludwike W M; van der Vossen, Martijn J B; Vonck, Bernard M D; Topsakal, Vedat; Bruijnzeel, Hanneke

    2018-05-24

    Although current guidelines recommend cochlear implantation only for children with profound hearing impairment (HI) (>90 decibel [dB] hearing level [HL]), studies show that children with severe hearing impairment (>70-90 dB HL) could also benefit from cochlear implantation. To perform a systematic review to identify audiologic thresholds (in dB HL) that could serve as an audiologic candidacy criterion for pediatric cochlear implantation using 4 domains of speech and language development as independent outcome measures (speech production, speech perception, receptive language, and auditory performance). PubMed and Embase databases were searched up to June 28, 2017, to identify studies comparing speech and language development between children who were profoundly deaf using cochlear implants and children with severe hearing loss using hearing aids, because no studies are available directly comparing children with severe HI in both groups. If cochlear implant users with profound HI score better on speech and language tests than those with severe HI who use hearing aids, this outcome could support adjusting cochlear implantation candidacy criteria to lower audiologic thresholds. Literature search, screening, and article selection were performed using a predefined strategy. Article screening was executed independently by 4 authors in 2 pairs; consensus on article inclusion was reached by discussion between these 4 authors. This study is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Title and abstract screening of 2822 articles resulted in selection of 130 articles for full-text review. Twenty-one studies were selected for critical appraisal, resulting in selection of 10 articles for data extraction. Two studies formulated audiologic thresholds (in dB HL) at which children could qualify for cochlear implantation: (1) at 4-frequency pure-tone average (PTA) thresholds of 80 dB HL or greater, based on speech perception and auditory performance subtests, and (2) at PTA thresholds of 88 and 96 dB HL, based on a speech perception subtest. In 8 of the 18 outcome measures, children with profound HI using cochlear implants performed similarly to children with severe HI using hearing aids. Better performance of cochlear implant users was shown with a picture-naming test and a speech perception in noise test. Owing to large heterogeneity in study population and selected tests, it was not possible to conduct a meta-analysis. Studies indicate that lower audiologic thresholds (≥80 dB HL) than are advised in current national and manufacturer guidelines would be appropriate as audiologic candidacy criteria for pediatric cochlear implantation.

  4. Speech-language pathology findings in Attention Deficit Hyperactivity Disorder: a systematic literature review.

    PubMed

    Machado-Nascimento, Nárli; Melo E Kümmer, Arthur; Lemos, Stela Maris Aguiar

    2016-01-01

    To systematically review the scientific production on the relationship between Attention Deficit Hyperactivity Disorder (ADHD) and speech-language pathology and to methodologically analyze the observational studies on the theme. A systematic review of the literature was conducted in the databases Medical Literature Analysis and Retrieval System Online (MEDLINE, USA), Latin American and Caribbean Health Sciences Literature (LILACS, Brazil), and Spanish Bibliographic Index of Health Sciences (IBECS, Spain) using the descriptors "Language", "Language Development", "Attention Deficit Hyperactivity Disorder", "ADHD", and "Auditory Perception". Inclusion criteria: full articles published in national and international journals from 2008 to 2013. Exclusion criteria: articles not focused on the speech-language pathology alterations present in attention deficit hyperactivity disorder. The articles were read in full and the data were extracted for characterization of methodology and content. The 23 articles found were separated according to two themes: speech-language pathology and Attention Deficit Hyperactivity Disorder. The study of the scientific production revealed that the alterations most commonly discussed were reading disorders and that there are few reports on the relationship between auditory processing and these disorders, as well as on the role of the speech-language pathologist in the evaluation and treatment of children with Attention Deficit Hyperactivity Disorder.

  5. Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.

    PubMed

    Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva

    2016-01-01

    Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month, 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediately postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the Listening in Spatialized Noise test, which assesses sentence perception in various configurations of masking speech and in which the target speakers and test materials were unrelated to the training materials; (2) the Children's Auditory Performance Scale, which assesses listening skills, completed by the children's teachers; and (3) the Clinical Evaluation of Language Fundamentals-4 pragmatic profile, which assesses pragmatic language use, completed by parents. All outcome measures significantly improved at immediate postintervention in the intervention group only, with effect sizes ranging from 0.76 to 1.7. Improvements in speech-in-noise performance correlated with improved scores on the Children's Auditory Performance Scale questionnaire in the trained group only. Baseline language and cognitive assessments did not predict better training outcome. Improvements in speech-in-noise performance were sustained 3 months postintervention. Broad speech-based auditory training led to improved auditory processing skills as reflected in speech-in-noise test performance and in better functional listening in real life. The observed correlation between improved functional listening and improved speech-in-noise perception in the trained group suggests that improved listening was a direct generalization of the auditory training.

  6. Auditory development in early amplified children: factors influencing auditory-based communication outcomes in children with hearing loss.

    PubMed

    Sininger, Yvonne S; Grimes, Alison; Christensen, Elizabeth

    2010-04-01

    The purpose of this study was to determine the influence of selected predictive factors, primarily age at fitting of amplification and degree of hearing loss, on auditory-based outcomes in young children with bilateral sensorineural hearing loss. Forty-four infants and toddlers with newly identified mild to profound bilateral hearing loss who were being fitted with amplification were enrolled in the study and followed longitudinally. Subjects were otherwise typically developing, with no evidence of cognitive, motor, or visual impairment. A variety of subject factors were measured or documented and used as predictor variables, including age at fitting of amplification, degree of hearing loss in the better hearing ear, cochlear implant status, intensity of oral education, parent-child interaction, and the number of languages spoken in the home. These factors were entered into a linear multiple regression analysis to assess their contribution to auditory-based communication outcomes. Five outcome measures, evaluated at regular intervals starting at age 3, included measures of speech perception (Pediatric Speech Intelligibility and Online Imitative Test of Speech Pattern Contrast Perception), speech production (Arizona-3), and spoken language (Reynell Expressive and Receptive Language). Age at fitting of amplification ranged from 1 to 72 months, and degree of hearing loss ranged from mild to profound. Age at fitting of amplification showed the largest influence and was a significant factor in all outcome models. Degree of hearing loss was an important factor in the modeling of speech production and spoken language outcomes. Cochlear implant use was the other factor that contributed significantly to speech perception, speech production, and language outcomes; other factors contributed sparsely to the models. Prospective longitudinal studies of children are important to establish relationships between subject factors and outcomes. This study clearly demonstrated the importance of early amplification for communication outcomes. This demonstration required a participant pool that included children who had been fitted at very early ages and who represented all degrees of hearing loss. Limitations of longitudinal studies include selection biases: families who enroll tend to have high levels of education and rate highly on cooperation and compliance measures. Although valuable information can be extracted from prospective studies, not all factors can be evaluated, because of enrollment constraints.

  7. Prediction of psychosis across protocols and risk cohorts using automated language analysis.

    PubMed

    Corcoran, Cheryl M; Carrillo, Facundo; Fernández-Slezak, Diego; Bedi, Gillinder; Klim, Casimir; Javitt, Daniel C; Bearden, Carrie E; Cecchi, Guillermo A

    2018-02-01

    Language and speech are the primary source of data for psychiatrists to diagnose and treat mental disorders. In psychosis, the very structure of language can be disturbed, including semantic coherence (e.g., derailment and tangentiality) and syntactic complexity (e.g., concreteness). Subtle disturbances in language are evident in schizophrenia even prior to first psychosis onset, during prodromal stages. Using computer-based natural language processing analyses, we previously showed that, among English-speaking clinical (e.g., ultra) high-risk youths, baseline reduction in semantic coherence (the flow of meaning in speech) and in syntactic complexity could predict subsequent psychosis onset with high accuracy. Herein, we aimed to cross-validate these automated linguistic analytic methods in a second larger risk cohort, also English-speaking, and to discriminate speech in psychosis from normal speech. We identified an automated machine-learning speech classifier - comprising decreased semantic coherence, greater variance in that coherence, and reduced usage of possessive pronouns - that had an 83% accuracy in predicting psychosis onset (intra-protocol), a cross-validated accuracy of 79% of psychosis onset prediction in the original risk cohort (cross-protocol), and a 72% accuracy in discriminating the speech of recent-onset psychosis patients from that of healthy individuals. The classifier was highly correlated with previously identified manual linguistic predictors. Our findings support the utility and validity of automated natural language processing methods to characterize disturbances in semantics and syntax across stages of psychotic disorder. The next steps will be to apply these methods in larger risk cohorts to further test reproducibility, also in languages other than English, and identify sources of variability. This technology has the potential to improve prediction of psychosis outcome among at-risk youths and identify linguistic targets for remediation and preventive intervention. More broadly, automated linguistic analysis can be a powerful tool for diagnosis and treatment across neuropsychiatry.
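
    As a rough illustration of the core linguistic feature (a sketch under stated assumptions, not the authors' pipeline: `embed` is a hypothetical stand-in for any sentence-embedding function, such as averaged word vectors), first-order semantic coherence can be computed as the cosine similarity between consecutive sentence vectors, with its mean and variance used as classifier features.

```python
# Sketch: semantic coherence as cosine similarity between embeddings
# of consecutive sentences. `embed` is a hypothetical stand-in for a
# sentence-embedding function returning a 1-D numpy array.
import numpy as np

def coherence_features(sentences, embed):
    vecs = [embed(s) for s in sentences]
    sims = [
        float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        for a, b in zip(vecs, vecs[1:])
    ]
    # The classifier described above used both reduced coherence and
    # greater variance in that coherence as predictors.
    return {"mean_coherence": np.mean(sims), "var_coherence": np.var(sims)}
```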

  8. Children's perception of their synthetically corrected speech production.

    PubMed

    Strömbergsson, Sofia; Wengelin, Asa; House, David

    2014-06-01

    We explore children's perception of their own speech - in its online form, in its recorded form, and in synthetically modified forms. Children with phonological disorder (PD) and children with typical speech and language development (TD) performed tasks of evaluating accuracy of the different types of speech stimuli, either immediately after having produced the utterance or after a delay. In addition, they performed a task designed to assess their ability to detect synthetic modification. Both groups showed high performance in tasks involving evaluation of other children's speech, whereas in tasks of evaluating one's own speech, the children with PD were less accurate than their TD peers. The children with PD were less sensitive to misproductions in immediate conjunction with their production of an utterance, and more accurate after a delay. Within-category modification often passed undetected, indicating a satisfactory quality of the generated speech. Potential clinical benefits of using corrective re-synthesis are discussed.

  9. Short-Term Memory Stages in Sign vs. Speech: The Source of the Serial Span Discrepancy

    PubMed Central

    Hall, Matthew L.

    2011-01-01

    Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory – perception, encoding, and recall – to this effect. The present study factorially manipulates whether American Sign Language (ASL) or English was used for perception, memory encoding, and recall in hearing ASL-English bilinguals. Results indicate that using ASL during both perception and encoding contributes to the serial span discrepancy. Interestingly, performing recall in ASL slightly increased span, ruling out the view that signing is in general a poor choice for short-term memory. These results suggest that despite the general equivalence of sign and speech in other memory domains, speech-based representations are better suited for the specific task of perception and memory encoding of a series of unrelated verbal items in serial order through the phonological loop. This work suggests that interpretation of performance on serial recall tasks in English may not translate straightforwardly to serial tasks in sign language. PMID:21450284

  10. The effects of ethnicity, musicianship, and tone language experience on pitch perception.

    PubMed

    Zheng, Yi; Samuel, Arthur G

    2018-02-01

    Language and music are intertwined: music training can facilitate language abilities, and language experiences can also help with some music tasks. Possible language-music transfer effects are explored in two experiments in this study. In Experiment 1, we tested native Mandarin, Korean, and English speakers on a pitch discrimination task with two types of sounds: speech sounds and fundamental frequency (F0) patterns derived from speech sounds. To control for factors that might influence participants' performance, we included cognitive ability tasks testing memory and intelligence. In addition, two music skill tasks were used to examine general transfer effects from language to music. Prior studies showing that tone language speakers have an advantage on pitch tasks have been taken as support for three alternative hypotheses: specific transfer effects, general transfer effects, and an ethnicity effect. In Experiment 1, musicians outperformed non-musicians on both speech and F0 sounds, suggesting a music-to-language transfer effect. Korean and Mandarin speakers performed similarly, and they both outperformed English speakers, providing some evidence for an ethnicity effect. Alternatively, this could be due to population selection bias. In Experiment 2, we recruited Chinese Americans approximating the native English speakers' language background to further test the ethnicity effect. Chinese Americans, regardless of their tone language experiences, performed similarly to their non-Asian American counterparts in all tasks. Therefore, although this study provides additional evidence of transfer effects across music and language, it casts doubt on the contribution of ethnicity to differences observed in pitch perception and general music abilities.

  11. Speaker-independent factors affecting the perception of foreign accent in a second language

    PubMed Central

    Levi, Susannah V.; Winters, Stephen J.; Pisoni, David B.

    2012-01-01

    Previous research on foreign accent perception has largely focused on speaker-dependent factors such as age of learning and length of residence. Factors that are independent of a speaker’s language learning history have also been shown to affect perception of second language speech. The present study examined the effects of two such factors—listening context and lexical frequency—on the perception of foreign-accented speech. Listeners rated foreign accent in two listening contexts: auditory-only, where listeners only heard the target stimuli, and auditory+orthography, where listeners were presented with both an auditory signal and an orthographic display of the target word. Results revealed that higher frequency words were consistently rated as less accented than lower frequency words. The effect of the listening context emerged in two interactions: the auditory+orthography context reduced the effects of lexical frequency, but increased the perceived differences between native and non-native speakers. Acoustic measurements revealed some production differences for words of different levels of lexical frequency, though these differences could not account for all of the observed interactions from the perceptual experiment. These results suggest that factors independent of the speakers’ actual speech articulations can influence the perception of degree of foreign accent. PMID:17471745

  12. Automatic detection of Parkinson's disease in running speech spoken in three different languages.

    PubMed

    Orozco-Arroyave, J R; Hönig, F; Arias-Londoño, J D; Vargas-Bonilla, J F; Daqrouq, K; Skodda, S; Rusz, J; Nöth, E

    2016-01-01

    The aim of this study is the analysis of continuous speech signals of people with Parkinson's disease (PD) considering recordings in different languages (Spanish, German, and Czech). A method for the characterization of the speech signals, based on the automatic segmentation of utterances into voiced and unvoiced frames, is addressed here. The energy content of the unvoiced sounds is modeled using 12 Mel-frequency cepstral coefficients and 25 bands scaled according to the Bark scale. Four speech tasks comprising isolated words, rapid repetition of the syllables /pa/-/ta/-/ka/, sentences, and read texts are evaluated. The method proves to be more accurate than classical approaches in the automatic classification of speech of people with PD and healthy controls. The accuracies range from 85% to 99% depending on the language and the speech task. Cross-language experiments are also performed confirming the robustness and generalization capability of the method, with accuracies ranging from 60% to 99%. This work comprises a step forward for the development of computer aided tools for the automatic assessment of dysarthric speech signals in multiple languages.
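
    A rough sketch of the characterization step, using off-the-shelf tools rather than the authors' implementation (the file name is hypothetical, and the 25 Bark-scaled band energies are omitted for brevity): frames are labelled voiced/unvoiced with a pitch tracker, and only the unvoiced frames are summarized with 12 MFCCs.

```python
# Sketch: model the energy content of unvoiced frames with 12 MFCCs.
# Assumes librosa; pyin and mfcc use the same default hop (512 samples),
# so their frame indices line up approximately.
import librosa
import numpy as np

y, sr = librosa.load("speech.wav", sr=16000)        # hypothetical recording
f0, voiced_flag, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12)  # shape (12, n_frames)

n = min(len(voiced_flag), mfcc.shape[1])
unvoiced = mfcc[:, :n][:, ~voiced_flag[:n]]         # keep unvoiced frames only

# Per-recording feature vector (means and deviations) for a classifier:
features = np.concatenate([unvoiced.mean(axis=1), unvoiced.std(axis=1)])
```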

  13. Lip-read me now, hear me better later: cross-modal transfer of talker-familiarity effects.

    PubMed

    Rosenblum, Lawrence D; Miller, Rachel M; Sanchez, Kauyumari

    2007-05-01

    There is evidence that for both auditory and visual speech perception, familiarity with the talker facilitates speech recognition. Explanations of these effects have concentrated on the retention of talker information specific to each of these modalities. It could be, however, that some amodal, talker-specific articulatory-style information facilitates speech perception in both modalities. If this is true, then experience with a talker in one modality should facilitate perception of speech from that talker in the other modality. In a test of this prediction, subjects were given about 1 hr of experience lipreading a talker and were then asked to recover speech in noise from either this same talker or a different talker. Results revealed that subjects who lip-read and heard speech from the same talker performed better on the speech-in-noise task than did subjects who lip-read from one talker and then heard speech from a different talker.

  14. Speech Research: A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for Its Investigation, and Practical Applications, 1 April - 30 June 1973.

    ERIC Educational Resources Information Center

    Haskins Labs., New Haven, CT.

    This document, containing 15 articles and 2 abstracts, is a report on the current status and progress of speech research. The following topics are investigated: phonological fusion, phonetic prerequisites for first-language learning, auditory and phonetic levels of processing, auditory short-term memory in vowel perception, hemispheric…

  15. Prosody Production and Perception with Conversational Speech

    ERIC Educational Resources Information Center

    Mo, Yoonsook

    2010-01-01

    Speech utterances are more than the linear concatenation of individual phonemes or words. They are organized by prosodic structures comprising phonological units of different sizes (e.g., syllable, foot, word, and phrase) and the prominence relations among them. As the linguistic structure of spoken languages, prosody serves an important function…

  16. Listeners' Perceptions of Speech and Language Disorders

    ERIC Educational Resources Information Center

    Allard, Emily R.; Williams, Dale F.

    2008-01-01

    Using semantic differential scales with nine trait pairs, 445 adults rated five audio-taped speech samples, one depicting an individual without a disorder and four portraying communication disorders. Statistical analyses indicated that the no disorder sample was rated higher with respect to the trait of employability than were the articulation,…

  17. The development of visual speech perception in Mandarin Chinese-speaking children.

    PubMed

    Chen, Liang; Lei, Jianghua

    2017-01-01

    The present study aimed to investigate the development of visual speech perception in Chinese-speaking children. Children aged 7, 13, and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in accuracy of visual speech perception between ages 7 and 13, after which the accuracy rate either stagnated or dropped; and (2) a U-shaped developmental pattern in speed of perception, with peak performance in 13-year-olds. Results also showed that across all age groups, the overall levels of accuracy rose, whereas the response times fell, for simplex finals, complex finals, and initials. These findings suggest that (1) visual speech perception in Chinese is a developmental process that is acquired over time and is still fine-tuned well into late adolescence; and (2) factors other than cross-linguistic differences in phonological complexity and degrees of reliance on visual information are involved in the development of visual speech perception.

  18. Ten-year follow-up of a consecutive series of children with multichannel cochlear implants.

    PubMed

    Uziel, Alain S; Sillon, Martine; Vieu, Adrienne; Artieres, Françoise; Piron, Jean-Pierre; Daures, Jean-Pierre; Mondain, Michel

    2007-08-01

    To assess, more than 10 years after implantation, a consecutive series of children who received implants with regard to speech perception, speech intelligibility, receptive language level, and academic/occupational status. In this prospective longitudinal study at a pediatric referral center for cochlear implantation, 82 prelingually deafened children received the Nucleus CI22 multichannel cochlear implant. The main outcome measures were the open-set Phonetically Balanced Kindergarten word test, discrimination of sentences in noise, connected discourse tracking (CDT) using voice and telephone, speech intelligibility rating (SIR), vocabulary knowledge measured using the Peabody Picture Vocabulary Test (Revised), academic performance in French language, foreign language, and mathematics, and academic/occupational status. After 10 years of implant experience, 79 children (96%) reported that they always wear the device, and 65 of the 82 children (79%) could use the telephone. The mean scores were 72% for the Phonetically Balanced Kindergarten word test, 44% for word recognition in noise, 55.3 words per minute for the CDT, and 33 words per minute for the CDT via telephone. Thirty-three children (40%) developed speech intelligible to the average listener (SIR 5), and 22 (27%) developed speech intelligible to a listener with little experience of deaf persons' speech (SIR 4). The vocabulary measures showed that most (76%) of the children who received implants scored below the median value of their normally hearing peers. Age at implantation was the most important factor influencing postimplant outcomes. Regarding educational/vocational status, 6 subjects attend universities, 3 already have a professional activity, 14 are currently at high school level, 32 are at junior high school level, 6 additional children are enrolled in a special unit for children with disability, and 3 children are still attending elementary schools; 17 are in further noncompulsory education studying a range of subjects at vocational level. This long-term report shows that many profoundly hearing-impaired children using cochlear implants can develop functional levels of speech perception and production, attain age-appropriate oral language, develop competency in a language other than their primary language, and achieve satisfactory academic performance.

  19. Identification and discrimination of bilingual talkers across languages

    PubMed Central

    Winters, Stephen J.; Levi, Susannah V.; Pisoni, David B.

    2008-01-01

    This study investigated the extent to which language familiarity affects the perception of the indexical properties of speech by testing listeners’ identification and discrimination of bilingual talkers across two different languages. In one experiment, listeners were trained to identify bilingual talkers speaking in only one language and were then tested on their ability to identify the same talkers speaking in another language. In the second experiment, listeners discriminated between bilingual talkers across languages in an AX discrimination paradigm. The results of these experiments indicate that there is sufficient language-independent indexical information in speech for listeners to generalize knowledge of talkers’ voices across languages and to successfully discriminate between bilingual talkers regardless of the language they are speaking. However, the results of these studies also revealed that listeners do not solely rely on language-independent information when performing these tasks. Listeners use language-dependent indexical cues to identify talkers who are speaking a familiar language. Moreover, the tendency to perceive two talkers as the “same” or “different” depends on whether the talkers are speaking in the same language. The combined results of these experiments thus suggest that indexical processing relies on both language-dependent and language-independent information in the speech signal. PMID:18537401

  20. A Mozart is not a Pavarotti: singers outperform instrumentalists on foreign accent imitation

    PubMed Central

    Christiner, Markus; Reiterer, Susanne Maria

    2015-01-01

    Recent findings have shown that people with higher musical aptitude were also better in oral language imitation tasks. However, whether singing capacity and instrument playing contribute differently to the imitation of speech has been ignored so far. Research has just recently started to understand that instrumentalists develop quite distinct skills when compared to vocalists. In the same vein the role of the vocal motor system in language acquisition processes has poorly been investigated as most investigations (neurobiological and behavioral) favor to examine speech perception. We set out to test whether the vocal motor system can influence an ability to learn, produce and perceive new languages by contrasting instrumentalists and vocalists. Therefore, we investigated 96 participants, 27 instrumentalists, 33 vocalists and 36 non-musicians/non-singers. They were tested for their abilities to imitate foreign speech: unknown language (Hindi), second language (English) and their musical aptitude. Results revealed that both instrumentalists and vocalists have a higher ability to imitate unintelligible speech and foreign accents than non-musicians/non-singers. Within the musician group, vocalists outperformed instrumentalists significantly. Conclusion: First, adaptive plasticity for speech imitation is not reliant on audition alone but also on vocal-motor induced processes. Second, vocal flexibility of singers goes together with higher speech imitation aptitude. Third, vocal motor training, as of singers, may speed up foreign language acquisition processes. PMID:26379537

  2. Sensitive Periods and Language in Cochlear Implant Users

    ERIC Educational Resources Information Center

    Moreno-Torres, Ignacio; Madrid-Canovas, Sonia; Blanco-Montanez, Gema

    2016-01-01

    This study explores the hypothesis that the existence of a short sensitive period for lower-level speech perception/articulation skills, and a long one for higher-level language skills, may partly explain the language outcomes of children with cochlear implants (CIs). The participants were fourteen children fitted with a CI before their second…

  3. Functional Neuroimaging of Speech Perception during a Pivotal Period in Language Acquisition

    ERIC Educational Resources Information Center

    Redcay, Elizabeth; Haist, Frank; Courchesne, Eric

    2008-01-01

    A pivotal period in the development of language occurs in the second year of life, when language comprehension undergoes rapid acceleration. However, the brain bases of these advances remain speculative as there is currently no functional magnetic resonance imaging (fMRI) data from healthy, typically developing toddlers at this age. We…

  4. Language and Culture in the Multiethnic Community: Spoken Language Assessment.

    ERIC Educational Resources Information Center

    Matluck, Joseph H.; Mace-Matluck, Betty J.

    This paper discusses the sociolinguistic problems inherent in multilingual testing, and the accompanying dangers of cultural bias in either the visuals or the language used in a given test. The first section discusses English-speaking Americans' perception of foreign speakers in terms of: (1) physical features; (2) speech, specifically vocabulary,…

  5. A Longitudinal Study of Pragmatic Language Development in Three Children with Cochlear Implants

    ERIC Educational Resources Information Center

    Dammeyer, Jesper

    2012-01-01

    Research has shown how cochlear implants (CIs), in children with hearing impairments, have improved speech perception and production, but very little is known about the children's pragmatic language development. During a 4-year longitudinal study of three children with CIs, certain aspects of pragmatic language development were observed in free…

  6. Speech rhythm analysis with decomposition of the amplitude envelope: characterizing rhythmic patterns within and across languages.

    PubMed

    Tilsen, Sam; Arvaniti, Amalia

    2013-07-01

    This study presents a method for analyzing speech rhythm using empirical mode decomposition of the speech amplitude envelope, which allows for extraction and quantification of syllabic- and supra-syllabic time-scale components of the envelope. The method of empirical mode decomposition of a vocalic energy amplitude envelope is illustrated in detail, and several types of rhythm metrics derived from this method are presented. Spontaneous speech extracted from the Buckeye Corpus is used to assess the effect of utterance length on metrics, and it is shown how metrics representing variability in the supra-syllabic time-scale components of the envelope can be used to identify stretches of speech with targeted rhythmic characteristics. Furthermore, the envelope-based metrics are used to characterize cross-linguistic differences in speech rhythm in the UC San Diego Speech Lab corpus of English, German, Greek, Italian, Korean, and Spanish speech elicited in read sentences, read passages, and spontaneous speech. The envelope-based metrics exhibit significant effects of language and elicitation method that argue for a nuanced view of cross-linguistic rhythm patterns.
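
    In outline, the analysis reduces to: extract a slowly varying amplitude envelope, decompose it with empirical mode decomposition, and quantify variability in components at different time scales. A bare-bones sketch is given below; it assumes the PyEMD package and a hypothetical audio file, and the authors' own envelope extraction and metrics differ in detail.

```python
# Sketch: empirical mode decomposition of a speech amplitude envelope.
# Lower-index intrinsic mode functions (IMFs) oscillate faster and track
# syllabic-scale energy fluctuations; higher-index IMFs capture slower,
# supra-syllabic modulation.
import librosa
from PyEMD import EMD

y, sr = librosa.load("utterance.wav", sr=16000)            # hypothetical file
env = librosa.feature.rms(y=y, frame_length=400, hop_length=160)[0]
env = env / env.max()                                      # envelope at 100 frames/s

imfs = EMD().emd(env)                                      # one row per IMF
for i, imf in enumerate(imfs):
    print(f"IMF {i}: std = {imf.std():.3f}")               # variability per time scale
```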

  7. Speech, Language, and Reading in 10-Year-Olds With Cleft: Associations With Teasing, Satisfaction With Speech, and Psychological Adjustment.

    PubMed

    Feragen, Kristin Billaud; Særvold, Tone Kristin; Aukner, Ragnhild; Stock, Nicola Marie

    2017-03-01

    Despite the use of multidisciplinary services, little research has addressed issues involved in the care of those with cleft lip and/or palate across disciplines. The aim was to investigate associations between speech, language, reading, and reports of teasing, subjective satisfaction with speech, and psychological adjustment. Cross-sectional data were collected during routine, multidisciplinary assessments in a centralized treatment setting, including speech and language therapists and clinical psychologists. Participants were children with cleft with palatal involvement aged 10 years from three birth cohorts (N = 170) and their parents. Speech was assessed with SVANTE-N; language with Language 6-16 (sentence recall, serial recall, vocabulary, and phonological awareness); reading with the Word Chain Test and the Reading Comprehension Test; and psychological adjustment with the Strengths and Difficulties Questionnaire and extracts from the Satisfaction With Appearance Scale and Child Experience Questionnaire. Reading skills were associated with self- and parent-reported psychological adjustment in the child. Subjective satisfaction with speech was associated with psychological adjustment, while not being consistently associated with speech therapists' assessments. Parent-reported teasing was associated with lower levels of reading skills. Having a medical and/or psychological condition in addition to the cleft significantly affected speech, language, and reading. Cleft teams need to be aware of speech, language, and/or reading problems as potential indicators of psychological risk in children with cleft. This study highlights the importance of multiple reports (self, parent, and specialist) and a multidisciplinary approach to cleft care and research.

  8. Pronunciation in Foreign Language: How to Train? Effects of Different Kinds of Training in Perception and Production of the Italian Consonant / ?/ by Adult German Learners

    ERIC Educational Resources Information Center

    Macedonia, Manuela

    2014-01-01

    This study investigates the role of perception and sensory motor learning on speech production in L2. Compared to natural language learning, acoustic input in formal adult instruction is deprived of multiple sensory motor cues and lacks the imitation component. Consequently, it is possible that inaccurate pronunciation results from training.…

  9. Speech Perception in Noise by Children With Cochlear Implants

    PubMed Central

    Caldwell, Amanda; Nittrouer, Susan

    2013-01-01

    Purpose: Common wisdom suggests that listening in noise poses disproportionately greater difficulty for listeners with cochlear implants (CIs) than for peers with normal hearing (NH). The purpose of this study was to examine phonological, language, and cognitive skills that might help explain speech-in-noise abilities for children with CIs. Method: Three groups of kindergartners (NH, hearing aid wearers, and CI users) were tested on speech recognition in quiet and noise and on tasks thought to underlie the abilities that fit into the domains of phonological awareness, general language, and cognitive skills. These last measures were used as predictor variables in regression analyses with speech-in-noise scores as dependent variables. Results: Compared to children with NH, children with CIs did not perform as well on speech recognition in noise or on most other measures, including recognition in quiet. Two surprising results were that (a) noise effects were consistent across groups and (b) scores on other measures did not explain any group differences in speech recognition. Conclusions: Limitations of implant processing take their primary toll on recognition in quiet and account for poor speech recognition and language/phonological deficits in children with CIs. Implications are that teachers/clinicians need to teach language/phonology directly and maximize signal-to-noise levels in the classroom. PMID:22744138

  10. Cross-Linguistic Comparison of Rhythmic and Phonotactic Similarity

    ERIC Educational Resources Information Center

    Stojanovic, Diana

    2013-01-01

    Literature on speech rhythm has been focused on three major questions: whether languages have rhythms that can be classified into a small number of types, what the criteria are for the membership in each class, and whether the perceived rhythmic similarity between languages can be quantified based on properties found in the speech signal. Claims…

  11. Linguistic Flexibility Modulates Speech Planning for Causative Motion Events: A Cross-Linguistic Study of Mandarin and English

    ERIC Educational Resources Information Center

    Zheng, Chun

    2017-01-01

    Producing a sensible utterance requires speakers to select conceptual content, lexical items, and syntactic structures almost instantaneously during speech planning. Each language offers its speakers flexibility in the selection of lexical and syntactic options to talk about the same scenarios involving movement. Languages also vary typologically…

  12. The speech naturalness of people who stutter speaking under delayed auditory feedback as perceived by different groups of listeners.

    PubMed

    Van Borsel, John; Eeckhout, Hannelore

    2008-09-01

    This study investigated listeners' perception of the speech naturalness of people who stutter (PWS) speaking under delayed auditory feedback (DAF) with particular attention for possible listener differences. Three panels of judges consisting of 14 stuttering individuals, 14 speech language pathologists, and 14 naive listeners rated the naturalness of speech samples of stuttering and non-stuttering individuals using a 9-point interval scale. Results clearly indicate that these three groups evaluate naturalness differently. Naive listeners appear to be more severe in their judgements than speech language pathologists and stuttering listeners, and speech language pathologists are apparently more severe than PWS. The three listener groups showed similar trends with respect to the relationship between speech naturalness and speech rate. Results of all three indicated that for PWS, the slower a speaker's rate was, the less natural speech was judged to sound. The three listener groups also showed similar trends with regard to naturalness of the stuttering versus the non-stuttering individuals. All three panels considered the speech of the non-stuttering participants more natural. The reader will be able to: (1) discuss the speech naturalness of people who stutter speaking under delayed auditory feedback, (2) discuss listener differences about the naturalness of people who stutter speaking under delayed auditory feedback, and (3) discuss the importance of speech rate for the naturalness of speech.

  13. Using visible speech to train perception and production of speech for individuals with hearing loss.

    PubMed

    Massaro, Dominic W; Light, Joanna

    2004-04-01

    The main goal of this study was to implement a computer-animated talking head, Baldi, as a language tutor for speech perception and production for individuals with hearing loss. Baldi can speak slowly; illustrate articulation by making the skin transparent to reveal the tongue, teeth, and palate; and show supplementary articulatory features, such as vibration of the neck to show voicing and turbulent airflow to show frication. Seven students with hearing loss between the ages of 8 and 13 were trained for 6 hours across 21 weeks on 8 categories of segments (4 voiced vs. voiceless distinctions, 3 consonant cluster distinctions, and 1 fricative vs. affricate distinction). Training included practice at the segment and the word level. Perception and production improved for each of the 7 children. Speech production also generalized to new words not included in the training lessons. Finally, speech production deteriorated somewhat after 6 weeks without training, indicating that the training method rather than some other experience was responsible for the improvement that was found.

  14. A cross-linguistic study of the development of gesture and speech in Zulu and French oral narratives.

    PubMed

    Kunene Nicolas, Ramona; Guidetti, Michèle; Colletta, Jean-Marc

    2017-01-01

    The present study reports on a developmental and cross-linguistic study of oral narratives produced by speakers of Zulu (a Bantu language) and French (a Romance language). Specifically, we focus on oral narrative performance as a bimodal (i.e., linguistic and gestural) behaviour during the late language acquisition phase. We analyzed seventy-two oral narratives produced by L1 Zulu and French adults and primary school children aged between five and ten years old. The data were all collected using a narrative retelling task. The results revealed a strong effect of age on discourse performance, confirming that narrative abilities improve with age, irrespective of language. However, the results also showed cross-linguistic differences. Zulu oral narratives were longer, more detailed, and accompanied by more co-speech gestures than the French narratives. The parallel effect of age and language on gestural behaviour is discussed and highlights the importance of studying oral narratives from a multimodal perspective within a cross-linguistic framework.

  15. Influences on infant speech processing: toward a new synthesis.

    PubMed

    Werker, J F; Tees, R C

    1999-01-01

    To comprehend and produce language, we must be able to recognize the sound patterns of our language and the rules for how these sounds "map on" to meaning. Human infants are born with a remarkable array of perceptual sensitivities that allow them to detect the basic properties that are common to the world's languages. During the first year of life, these sensitivities undergo modification reflecting an exquisite tuning to just that phonological information that is needed to map sound to meaning in the native language. We review this transition from language-general to language-specific perceptual sensitivity that occurs during the first year of life and consider whether the changes propel the child into word learning. To account for the broad-based initial sensitivities and subsequent reorganizations, we offer an integrated transactional framework based on the notion of a specialized perceptual-motor system that has evolved to serve human speech, but which functions in concert with other developing abilities. In so doing, we highlight the links between infant speech perception, babbling, and word learning.

  17. Orthography and Modality Influence Speech Production in Adults and Children

    ERIC Educational Resources Information Center

    Saletta, Meredith; Goffman, Lisa; Hogan, Tiffany P.

    2016-01-01

    Purpose: The acquisition of literacy skills influences the perception and production of spoken language. We examined if orthography influences implicit processing in speech production in child readers and in adult readers with low and high reading proficiency. Method: Children (n = 17), adults with typical reading skills (n = 17), and adults…

  18. Cross-Linguistic Differences in Bilinguals' Fundamental Frequency Ranges

    ERIC Educational Resources Information Center

    Ordin, Mikhail; Mennen, Ineke

    2017-01-01

    Purpose: We investigated cross-linguistic differences in fundamental frequency range (FFR) in Welsh-English bilingual speech. This is the first study that reports gender-specific behavior in switching FFRs across languages in bilingual speech. Method: FFR was conceptualized as a behavioral pattern using measures of span (range of fundamental…

  19. Prevalence of communication, swallowing and orofacial myofunctional disorders in children and adolescents at the time of admission at a cancer hospital.

    PubMed

    Coça, Kaliani Lima; Bergmann, Anke; Ferman, Sima; Angelis, Elisabete Carrara de; Ribeiro, Márcia Gonçalves

    2018-03-01

    To describe the prevalence of communication, swallowing, and orofacial myofunctional disorders in a group of children and adolescents at the time of registration at a cancer hospital. A cross-sectional study was conducted with children aged ≥2 years and adolescents, of both genders, admitted to the Pediatric Oncology Section of the Instituto Nacional de Câncer José de Alencar Gomes da Silva (INCA) from March 2014 to April 2015 for investigation and/or treatment of solid tumors. A protocol was used to record the sociodemographic and clinical information and the findings of the speech-language pathology clinical evaluation, which included aspects of the oral sensorimotor system, swallowing, speech, language, voice, and hearing. Eighty-eight children/adolescents (41.3%) presented some type of speech-language disorder. The most frequent speech-language disorders were orofacial myofunctional disorder, dysphonia, and language impairments, whereas the less frequent ones were dysacusis, tongue paralysis, and trismus. Site of the lesion was the clinical variable that showed a statistically significant correlation with the presence of speech-language disorders. A high prevalence of speech-language disorders was thus observed in children and adolescents at the time of admission at a cancer hospital, and the occurrence of speech-language disorders was higher in participants with lesions in the central nervous system and in the head and neck region.

  20. Music Training Can Improve Music and Speech Perception in Pediatric Mandarin-Speaking Cochlear Implant Users.

    PubMed

    Cheng, Xiaoting; Liu, Yangwenyi; Shu, Yilai; Tao, Duo-Duo; Wang, Bing; Yuan, Yasheng; Galvin, John J; Fu, Qian-Jie; Chen, Bing

    2018-01-01

    Due to limited spectral resolution, cochlear implants (CIs) do not convey pitch information very well. Pitch cues are important for perception of music and tonal language; it is possible that music training may improve performance in both listening tasks. In this study, we investigated music training outcomes in terms of perception of music, lexical tones, and sentences in 22 young (4.8 to 9.3 years old), prelingually deaf Mandarin-speaking CI users. Music perception was measured using a melodic contour identification (MCI) task. Speech perception was measured for lexical tones and sentences presented in quiet. Subjects received 8 weeks of MCI training using pitch ranges not used for testing. Music and speech perception were measured at 2, 4, and 8 weeks after training was begun; follow-up measures were made 4 weeks after training was stopped. Mean baseline performance was 33.2%, 76.9%, and 45.8% correct for MCI, lexical tone recognition, and sentence recognition, respectively. After 8 weeks of MCI training, mean performance significantly improved by 22.9, 14.4, and 14.5 percentage points for MCI, lexical tone recognition, and sentence recognition, respectively (p < .05 in all cases). Four weeks after training was stopped, there was no significant change in posttraining music and speech performance. The results suggest that music training can significantly improve pediatric Mandarin-speaking CI users' music and speech perception.

  1. Phonological encoding in speech-sound disorder: evidence from a cross-modal priming experiment.

    PubMed

    Munson, Benjamin; Krause, Miriam O P

    2017-05-01

    Psycholinguistic models of language production provide a framework for determining the locus of the language breakdown that leads to speech-sound disorder (SSD) in children. To examine whether children with SSD differ from their age-matched peers with typical speech and language development (TD) in their ability to phonologically encode lexical items that have been accessed from memory. Thirty-six children (18 with TD, 18 with SSD) viewed pictures while listening to interfering words (IWs) or a non-linguistic auditory stimulus presented over headphones either 150 ms before, concurrent with, or 150 ms after picture presentation. The phonological similarity of the IW and the pictures' names varied. Picture-naming latency, accuracy, and duration were recorded. All children named pictures more quickly in the presence of an IW identical to the picture's name than in the other conditions. At the +150 ms stimulus onset asynchrony, pictures were named more quickly when the IW shared phonemes with the picture's name than when it was phonologically unrelated to the picture's name. The size of this effect was similar for children with SSD and children with TD. Variation in the magnitude of inhibition and facilitation on cross-modal priming tasks across children was more strongly related to the size of the expressive and receptive lexicons than to speech-production accuracy. The results suggest that SSD is not associated with reduced phonological encoding ability, at least as reflected by cross-modal naming tasks. © 2016 Royal College of Speech and Language Therapists.
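
    Effects in this paradigm are usually summarized as the naming-latency difference between phonologically related and unrelated interfering-word trials at each stimulus onset asynchrony. A toy sketch of that computation; the data frame and its values are invented:

    ```python
    import pandas as pd

    # Hypothetical trial-level data: one row per picture-naming trial.
    trials = pd.DataFrame({
        "soa_ms":    [-150, -150, 0, 0, 150, 150, 150, 150],
        "condition": ["related", "unrelated"] * 4,
        "rt_ms":     [934, 951, 920, 948, 880, 942, 876, 939],
    })

    # Facilitation = unrelated minus related naming latency at each SOA;
    # positive values mean shared phonemes speeded naming.
    means = trials.groupby(["soa_ms", "condition"])["rt_ms"].mean().unstack()
    print((means["unrelated"] - means["related"]).rename("facilitation_ms"))
    ```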

  2. American or British? L2 Speakers' Recognition and Evaluations of Accent Features in English

    ERIC Educational Resources Information Center

    Carrie, Erin; McKenzie, Robert M.

    2018-01-01

    Recent language attitude research has attended to the processes involved in identifying and evaluating spoken language varieties. This article investigates the ability of second-language learners of English in Spain (N = 71) to identify Received Pronunciation (RP) and General American (GenAm) speech and their perceptions of linguistic variation…

  3. Linguistic Profiles of Children with CI as Compared with Children with Hearing or Specific Language Impairment

    ERIC Educational Resources Information Center

    Hoog, Brigitte E.; Langereis, Margreet C.; Weerdenburg, Marjolijn; Knoors, Harry E. T.; Verhoeven, Ludo

    2016-01-01

    Background: The spoken language difficulties of children with moderate or severe to profound hearing loss are mainly related to limited auditory speech perception. However, degraded or filtered auditory input as evidenced in children with cochlear implants (CIs) may result in less efficient or slower language processing as well. To provide insight…

  4. AAC and Early Intervention for Children with Cerebral Palsy: Parent Perceptions and Child Risk Factors

    PubMed Central

    Smith, Ashlyn L.; Hustad, Katherine C.

    2015-01-01

    The current study examined parent perceptions of communication, the focus of early intervention goals and strategies, and factors predicting the implementation of augmentative and alternative communication (AAC) for 26 two-year-old children with cerebral palsy. Parents completed a communication questionnaire and provided early intervention plans detailing child speech and language goals. Results indicated that receptive language had the strongest association with parent perceptions of communication. Children who were not talking received a greater number of intervention goals, had a greater variety of goals, and had more AAC goals than children who were emerging or established talkers. Finally, expressive language had the strongest influence on AAC decisions. Results are discussed in terms of the relationship between parent perceptions and language skills, communication as an emphasis in early intervention, AAC intervention decisions, and the importance of receptive language. PMID:26401966

  5. A Cross-Language Study of Acoustic Predictors of Speech Intelligibility in Individuals with Parkinson's Disease

    ERIC Educational Resources Information Center

    Kim, Yunjung; Choi, Yaelin

    2017-01-01

    Purpose: The present study aimed to compare acoustic models of speech intelligibility in individuals with the same disease (Parkinson's disease [PD]) and presumably similar underlying neuropathologies but with different native languages (American English [AE] and Korean). Method: A total of 48 speakers from the 4 speaker groups (AE speakers with…

  6. Temporal Variability and Stability in Infant-Directed Sung Speech: Evidence for Language-Specific Patterns

    ERIC Educational Resources Information Center

    Falk, Simone

    2011-01-01

    In this paper, sung speech is used as a methodological tool to explore temporal variability in the timing of word-internal consonants and vowels. It is hypothesized that temporal variability/stability becomes clearer under the varying rhythmical conditions induced by song. This is explored cross-linguistically in German--a language that exhibits a…

  7. Issues in the Development of Cross-Cultural Assessments of Speech and Language for Children

    ERIC Educational Resources Information Center

    Carter, Julie A.; Lees, Janet A.; Murira, Gladys M.; Gona, Joseph; Neville, Brian G. R.; Newton, Charles R. J. C.

    2005-01-01

    Background: There is an increasing demand for the assessment of speech and language in clinical and research situations in countries where there are few assessment resources. Due to the nature of cultural variation and the potential for cultural bias, new assessment tools need to be developed or existing tools require adaptation. However, there…

  8. Infants with Williams syndrome detect statistical regularities in continuous speech.

    PubMed

    Cashon, Cara H; Ha, Oh-Ryeong; Graf Estes, Katharine; Saffran, Jenny R; Mervis, Carolyn B

    2016-09-01

    Williams syndrome (WS) is a rare genetic disorder associated with delays in language and cognitive development. The reasons for the language delay are unknown. Statistical learning is a domain-general mechanism recruited for early language acquisition. In the present study, we investigated whether infants with WS were able to detect the statistical structure in continuous speech. Eighteen 8- to 20-month-olds with WS were familiarized with 2 min of a continuous stream of synthesized nonsense words; the statistical structure of the speech was the only cue to word boundaries. They were tested on their ability to discriminate statistically defined "words" and "part-words" (which crossed word boundaries) in the artificial language. Despite significant cognitive and language delays, infants with WS were able to detect the statistical regularities in the speech stream. These findings suggest that an inability to track the statistical properties of speech is unlikely to be the primary basis for the delays in the onset of language observed in infants with WS. These results provide the first evidence of statistical learning by infants with developmental delays. Copyright © 2016 Elsevier B.V. All rights reserved.
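
    In this paradigm, "words" are sequences with high internal transitional probabilities, TP(a→b) = count(ab)/count(a), while "part-words" span boundaries where TP drops. A sketch with an invented three-word artificial language (the syllables are hypothetical, not the study's stimuli):

    ```python
    import random
    from collections import Counter

    def transitional_probabilities(syllables):
        """TP(a -> b) = count(ab) / count(a) over a continuous stream."""
        bigrams = Counter(zip(syllables, syllables[1:]))
        unigrams = Counter(syllables[:-1])
        return {(a, b): n / unigrams[a] for (a, b), n in bigrams.items()}

    # Hypothetical trisyllabic "words" concatenated without pauses, as in
    # Saffran-style familiarization streams.
    random.seed(0)
    words = [["pa", "bi", "ku"], ["ti", "bu", "do"], ["go", "la", "tu"]]
    stream = [syl for _ in range(100) for syl in random.choice(words)]

    tps = transitional_probabilities(stream)
    print(tps[("pa", "bi")])           # within-word TP: 1.0
    print(tps.get(("ku", "ti"), 0.0))  # across-boundary TP: ~1/3
    ```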

  9. Neural reuse of action perception circuits for language, concepts and communication.

    PubMed

    Pulvermüller, Friedemann

    2018-01-01

    Neurocognitive and neurolinguistic theories make explicit statements relating specialized cognitive and linguistic processes to specific brain loci. These linking hypotheses are in need of neurobiological justification and explanation. Recent mathematical models of human language mechanisms constrained by fundamental neuroscience principles and established knowledge about comparative neuroanatomy offer explanations for where, when and how language is processed in the human brain. In these models, network structure and connectivity along with action- and perception-induced correlation of neuronal activity co-determine neurocognitive mechanisms. Language learning leads to the formation of action perception circuits (APCs) with specific distributions across cortical areas. Cognitive and linguistic processes such as speech production, comprehension, verbal working memory and prediction are modelled by activity dynamics in these APCs, and combinatorial and communicative-interactive knowledge is organized in the dynamics within, and connections between, APCs. The network models and, in particular, the concept of distributionally specific circuits can account for some previously poorly understood facts about the cortical 'hubs' for semantic processing and the motor system's role in language understanding and speech sound recognition. A review of experimental data evaluates predictions of the APC model and alternative theories, also providing detailed discussion of some seemingly contradictory findings. Throughout, recent disputes about the role of mirror neurons and grounded cognition in language and communication are assessed critically. Copyright © 2017 The Author. Published by Elsevier Ltd. All rights reserved.
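
    The circuit-formation claim rests on correlation learning: units co-active during an articulation and its acoustic consequence strengthen their connections. A deliberately cartoonish Hebbian sketch of that principle only; the sizes, learning rate, and the identity mapping below are toy assumptions, not the authors' network model:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_units, eta = 8, 0.05
    w = np.zeros((n_units, n_units))  # action -> perception connection weights

    # Babbling loop: an articulatory pattern and the sound it produces are
    # co-active, so Hebbian updates strengthen exactly those pairings.
    for _ in range(200):
        action = (rng.random(n_units) < 0.3).astype(float)
        percept = action.copy()  # self-produced sound mirrors the action
        w += eta * np.outer(action, percept)

    print(np.round(w.diagonal(), 2))  # paired units end up strongly linked
    ```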

  10. Effects of central nervous system residua on cochlear implant results in children deafened by meningitis.

    PubMed

    Francis, Howard W; Pulsifer, Margaret B; Chinnici, Jill; Nutt, Robert; Venick, Holly S; Yeagle, Jennifer D; Niparko, John K

    2004-05-01

    This study explored factors associated with speech recognition outcomes in postmeningitic deafness (PMD). The results of cochlear implantation may vary in children with PMD because of sequelae that extend beyond the auditory periphery. To determine which factors might be most determinative of outcome of cochlear implantation in children with PMD. Retrospective chart review. A referral center for pediatric cochlear implantation and rehabilitation. Thirty children with cochlear implants who were deafened by meningitis were matched with subjects who were deafened by other causes based on the age at diagnosis, age at cochlear implantation, age at which hearing aids were first used, and method of communication used at home or in the classroom. Speech perception performance within the first 2 years after cochlear implantation and its relationship with presurgical cognitive measures and medical history. There was no difference in the overall cognitive or postoperative speech perception performance between the children with PMD and those deafened by other causes. The presence of postmeningitic hydrocephalus, however, posed greater challenges to the rehabilitation process, as indicated by significantly smaller gains in speech perception and a predilection for behavioral problems. By comparison, cochlear scarring and incomplete electrode insertion had no impact on speech perception results. Although the results demonstrated no significant delay in cognitive or speech perception performance in the PMD group, central nervous system residua, when present, can impede the acquisition of speech perception with a cochlear implant. Central effects associated with PMD may thus impact language learning potential; cognitive and behavioral therapy should be considered in rehabilitative planning and in establishing expectations of outcome.

  11. Patterns of language and auditory dysfunction in 6-year-old children with epilepsy.

    PubMed

    Selassie, Gunilla Rejnö-Habte; Olsson, Ingrid; Jennische, Margareta

    2009-01-01

    In a previous study we reported difficulty with expressive language and visuoperceptual ability in preschool children with epilepsy and otherwise normal development. The present study analysed speech and language dysfunction for each individual in relation to epilepsy variables, ear preference, and intelligence in these children and described their auditory function. Twenty 6-year-old children with epilepsy (14 females, 6 males; mean age 6:5 y, range 6 y-6 y 11 mo) and 30 reference children without epilepsy (18 females, 12 males; mean age 6:5 y, range 6 y-6 y 11 mo) were assessed for language and auditory ability. Low scores for the children with epilepsy were analysed with respect to speech-language domains, type of epilepsy, site of epileptiform activity, intelligence, and language laterality. Auditory attention, perception, discrimination, and ear preference were measured with a dichotic listening test, and group comparisons were performed. Children with left-sided partial epilepsy had extensive language dysfunction. Most children with partial epilepsy had phonological dysfunction. Language dysfunction was also found in children with generalized and unclassified epilepsies. The children with epilepsy performed significantly worse than the reference children in auditory attention, perception of vowels and discrimination of consonants for the right ear and had more left ear advantage for vowels, indicating undeveloped language laterality.
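
    Ear preference in dichotic listening is conventionally summarized with a laterality index over correct reports from each ear; the (R - L)/(R + L) formula below is that standard convention, not necessarily the exact metric used in this study:

    ```python
    def laterality_index(right_correct, left_correct):
        """(R - L) / (R + L): positive = right-ear advantage, usually read
        as left-hemisphere language lateralization."""
        total = right_correct + left_correct
        return (right_correct - left_correct) / total if total else 0.0

    # Hypothetical vowel-report scores from a dichotic listening test.
    print(laterality_index(right_correct=18, left_correct=12))  # 0.2 -> REA
    ```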

  12. Student diversity and implications for clinical competency development amongst domestic and international speech-language pathology students.

    PubMed

    Attrill, Stacie; Lincoln, Michelle; McAllister, Sue

    2012-06-01

    International students graduating from speech-language pathology university courses must achieve the same minimum competency standards as domestic students. This study aimed to collect descriptive information about the number, origin, and placement performance of international students, as well as perceptions of the performance of international students on placement. University Clinical Education Coordinators (CECs), who manage clinical placements in eight undergraduate and six graduate-entry programs across the 10 participating universities in Australia and New Zealand, completed a survey about 3455 international and domestic speech-language pathology students. Survey responses were analysed quantitatively and qualitatively with non-parametric statistics and thematic analysis. Results indicated that international students came from a variety of countries, with a regional focus on Central and Southern Asia. Although domestic students experienced significantly less placement failure, fewer supplementary placements, and less additional placement support than international students, the effect sizes of these relationships were consistently small and therefore weak. CECs rated international students as more frequently experiencing difficulties with communication competencies on placement. However, CECs' qualitative comments revealed that culturally and linguistically diverse (CALD) students may experience more difficulties with speech-language pathology competency development than international students. Students' CALD status should be included in future investigations of factors influencing speech-language pathology competency development.

  13. Benefits of Music Training for Perception of Emotional Speech Prosody in Deaf Children With Cochlear Implants

    PubMed Central

    Good, Arla; Gordon, Karen A.; Papsin, Blake C.; Nespoli, Gabe; Hopyan, Talar; Peretz, Isabelle; Russo, Frank A.

    2017-01-01

    Objectives: Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception. Design: Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions. Results: Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements. Conclusions: Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation. PMID:28085739

  15. Effects of social cognitive impairment on speech disorder in schizophrenia.

    PubMed

    Docherty, Nancy M; McCleery, Amanda; Divilbiss, Marielle; Schumann, Emily B; Moe, Aubrey; Shakeel, Mohammed K

    2013-05-01

    Disordered speech in schizophrenia impairs social functioning because it impedes communication with others. Treatment approaches targeting this symptom have been limited by an incomplete understanding of its causes. This study examined the process underpinnings of speech disorder, assessed in terms of communication failure. Contributions of impairments in 2 social cognitive abilities, emotion perception and theory of mind (ToM), to speech disorder were assessed in 63 patients with schizophrenia or schizoaffective disorder and 21 nonpsychiatric participants, after controlling for the effects of verbal intelligence and impairments in basic language-related neurocognitive abilities. After removal of the effects of the neurocognitive variables, impairments in emotion perception and ToM each explained additional variance in speech disorder in the patients but not the controls. The neurocognitive and social cognitive variables, taken together, explained 51% of the variance in speech disorder in the patients. Schizophrenic disordered speech may be less a concomitant of "positive" psychotic process than of illness-related limitations in neurocognitive and social cognitive functioning.
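
    The analysis is hierarchical: neurocognitive covariates enter first, then the two social-cognitive predictors, and their contribution is the increment in explained variance. A sketch of that ΔR² logic on simulated data; every coefficient and score below is invented (chosen only so the full model explains roughly half the variance, as reported):

    ```python
    import numpy as np

    def r_squared(X, y):
        """R^2 of an ordinary least-squares fit with an intercept."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1 - resid.var() / y.var()

    rng = np.random.default_rng(3)
    n = 63
    neurocog = rng.normal(size=(n, 2))  # e.g., verbal IQ, basic language scores
    social = rng.normal(size=(n, 2))    # e.g., emotion perception, theory of mind
    y = neurocog @ [0.4, 0.3] + social @ [0.5, 0.4] + rng.normal(scale=0.8, size=n)

    r2_base = r_squared(neurocog, y)
    r2_full = r_squared(np.hstack([neurocog, social]), y)
    print(f"step 1 R2 = {r2_base:.2f}; "
          f"delta R2 from social cognition = {r2_full - r2_base:.2f}")
    ```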

  16. Why do speech and language therapists stay in, leave and (sometimes) return to the National Health Service (NHS)?

    PubMed

    Loan-Clarke, John; Arnold, John; Coombs, Crispin; Bosley, Sara; Martin, Caroline

    2009-01-01

    Research into the recruitment, retention and return of speech and language therapists in the National Health Service (NHS) is relatively limited, particularly in respect of understanding the factors that drive employment choice decisions. The aims were: to identify what factors influence speech and language therapists working in the NHS to stay, or to consider leaving but not do so; to identify what features of the NHS and alternative employers influence speech and language therapists to leave the NHS; and to identify why some speech and language therapists return to the NHS after working elsewhere. A total of 516 male and female speech and language therapists, in three distinct groups (NHS stayers, leavers and returners), completed a questionnaire and gave responses to open-ended questions regarding their perceptions of the NHS and other employers. Qualitative data analysis identified reasons why individuals stayed in, left or returned to NHS employment, and what actions could be taken by management to facilitate retention and return. Stayers value job and pension security; professional development opportunities; the work itself; and professional support. Leavers not involved in childcare left because of workload/pressure/stress; poor pay; and not being able to give good patient care. Returners returned because of flexible hours; work location; professional development; and pension provision. Stayers and returners primarily wish to see more staff in the NHS, whereas leavers would return if there were more flexibility in work arrangements. Returners were particularly hostile towards Agenda for Change. Whilst some preferences appear to require increased resources, others, such as reducing bureaucracy and valuing professionals, do not. The full impact of Agenda for Change has yet to be established. A predicted excess labour supply of allied health professionals and future structural changes present pressures but also possible opportunities for speech and language therapists.

  17. Audiovisual speech perception in infancy: The influence of vowel identity and infants' productive abilities on sensitivity to (mis)matches between auditory and visual speech cues.

    PubMed

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-02-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds on their ability to detect mismatches between concurrently presented auditory and visual vowels and related their performance to their productive abilities and later vocabulary size. Results show that infants' ability to detect mismatches between auditory and visually presented vowels differs depending on the vowels involved. Furthermore, infants' sensitivity to mismatches is modulated by their current articulatory knowledge and correlates with their vocabulary size at 12 months of age. This suggests that, aside from infants' ability to match nonnative audiovisual cues (Pons et al., 2009), their ability to match native auditory and visual cues continues to develop during the first year of life. Our findings point to a potential role of salient vowel cues and productive abilities in the development of audiovisual speech perception, and further indicate a relation between infants' early sensitivity to audiovisual speech cues and their later language development. PsycINFO Database Record (c) 2016 APA, all rights reserved.

  18. Children Discover the Spectral Skeletons in Their Native Language before the Amplitude Envelopes

    ERIC Educational Resources Information Center

    Nittrouer, Susan; Lowenstein, Joanna H.; Packer, Robert R.

    2009-01-01

    Much of speech perception research has focused on brief spectro-temporal properties in the signal, but some studies have shown that adults can recover linguistic form when those properties are absent. In this experiment, 7-year-old English-speaking children demonstrated adultlike abilities to understand speech when only sine waves (SWs)…

  19. Native Speakers' Perceptions of Fluency and Accent in L2 Speech

    ERIC Educational Resources Information Center

    Pinget, Anne-France; Bosker, Hans Rutger; Quené, Hugo; de Jong, Nivja H.

    2014-01-01

    Oral fluency and foreign accent distinguish L2 from L1 speech production. In language testing practices, both fluency and accent are usually assessed by raters. This study investigates what exactly native raters of fluency and accent take into account when judging L2. Our aim is to explore the relationship between objectively measured temporal,…

  20. Another Look at Cross-Language Competition in Bilingual Speech Production: Lexical and Phonological Factors

    ERIC Educational Resources Information Center

    Costa, Albert; Colome, Angels; Gomez, Olga; Sebastian-Galles, Nuria

    2003-01-01

    How does lexical selection function in highly-proficient bilingual speakers? What is the role of the non-response language during the course of lexicalization? Evidence of cross-language interference was obtained by Hermans, Bongaerts, De Bot and Schreuder (1998) using the picture-word interference paradigm: participants took longer to name the…

  1. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    PubMed

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
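
    The masking procedure works like reverse correlation: masks from trials on which the visual information failed to change the percept are averaged against masks from trials on which it succeeded, and the difference localizes the informative frames. A simplified per-frame simulation (the actual study used spatiotemporal masks over the mouth region; everything below is a toy):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_trials, n_frames = 2000, 30

    # Each trial: a random binary mask saying which video frames revealed
    # the mouth. Toy ground truth: frames 8-12 carry the visual /aka/ cue.
    masks = rng.random((n_trials, n_frames)) < 0.5
    visual_evidence = masks[:, 8:13].sum(axis=1)

    # Simulated response: /apa/ reported when the visual cue was mostly hidden.
    p_apa = 1 / (1 + np.exp(visual_evidence - 2.5))
    said_apa = rng.random(n_trials) < p_apa

    # Classification image: mask average on /apa/ trials minus the rest;
    # negative dips mark frames whose visibility suppressed /apa/ reports.
    cimg = masks[said_apa].mean(axis=0) - masks[~said_apa].mean(axis=0)
    print(np.round(cimg, 2))
    ```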

  3. Differential Neural Contributions to Native- and Foreign-Language Talker Identification

    ERIC Educational Resources Information Center

    Perrachione, Tyler K.; Pierrehumbert, Janet B.; Wong, Patrick C. M.

    2009-01-01

    Humans are remarkably adept at identifying individuals by the sound of their voice, a behavior supported by the nervous system's ability to integrate information from voice and speech perception. Talker-identification abilities are significantly impaired when listeners are unfamiliar with the language being spoken. Recent behavioral studies…

  4. Rater Judgment and English Language Speaking Proficiency. Research Report

    ERIC Educational Resources Information Center

    Chalhoub-Deville, Micheline; Wigglesworth, Gillian

    2005-01-01

    The paper investigates whether there is a shared perception of speaking proficiency among raters from different English speaking countries. More specifically, this study examines whether there is a significant difference among English language learning (ELL) teachers, residing in Australia, Canada, the UK, and the USA when rating speech samples of…

  5. Suggested Outline for Auditory Perception Training.

    ERIC Educational Resources Information Center

    Kelley, Clare A.

    Presented are suggestions for speech therapists to use in auditory perception training and screening of language handicapped children in kindergarten through grade 3. Directions are given for using the program, which is based on games. Each component is presented in terms of purpose, materials, a description of the game, and directions for…

  6. Individual differences in language and working memory affect children's speech recognition in noise.

    PubMed

    McCreery, Ryan W; Spratford, Meredith; Kirby, Benjamin; Brennan, Marc

    2017-05-01

    We examined how cognitive and linguistic skills affect speech recognition in noise for children with normal hearing. Children with better working memory and language abilities were expected to have better speech recognition in noise than peers with poorer skills in these domains. As part of a prospective, cross-sectional study, children with normal hearing completed speech recognition in noise for three types of stimuli: (1) monosyllabic words, (2) syntactically correct but semantically anomalous sentences, and (3) semantically and syntactically anomalous word sequences. Measures of vocabulary, syntax, and working memory were used to predict individual differences in speech recognition in noise. Participants were 96 children with normal hearing between 5 and 12 years of age. Higher working memory was associated with better speech recognition in noise for all three stimulus types. Higher vocabulary abilities were associated with better recognition in noise for sentences and word sequences, but not for words. Working memory and language both influence children's speech recognition in noise, but the relationships vary across types of stimuli. These findings suggest that clinical assessment of speech recognition is likely to reflect underlying cognitive and linguistic abilities, in addition to a child's auditory skills, consistent with the Ease of Language Understanding model.
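
    The reported pattern, working memory tracking all three materials and vocabulary only the sentence-level ones, amounts to a predictor-by-material correlation table. A toy sketch with invented scores for five children (a real analysis would use the full n = 96 and control for age):

    ```python
    import pandas as pd

    # Hypothetical per-child scores: predictors and recognition accuracy
    # for the three stimulus types used in the study.
    df = pd.DataFrame({
        "working_memory": [95, 102, 88, 110, 99],
        "vocabulary":     [100, 108, 92, 115, 97],
        "words":          [0.72, 0.78, 0.66, 0.81, 0.74],
        "sentences":      [0.80, 0.88, 0.70, 0.93, 0.79],
        "word_sequences": [0.55, 0.63, 0.48, 0.70, 0.57],
    })

    # Which skills track recognition for which materials?
    print(df.corr().loc[["working_memory", "vocabulary"],
                        ["words", "sentences", "word_sequences"]])
    ```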

  7. Dynamic Range for Speech Materials in Korean, English, and Mandarin: A Cross-Language Comparison

    ERIC Educational Resources Information Center

    Jin, In-Ki; Kates, James M.; Arehart, Kathryn H.

    2014-01-01

    Purpose: The purpose of this study was to identify whether differences in dynamic range (DR) are evident across the spoken languages of Korean, English, and Mandarin. Method: Recorded sentence-level speech materials were used as stimuli. DR was quantified using different definitions of DR (defined as the range in decibels from the highest to the…
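
    The definition is cut off mid-sentence, but DR measures of this kind are generally the dB spread of a short-term level envelope. A sketch under that assumption; the frame length and percentile bounds are illustrative, not the study's definitions:

    ```python
    import numpy as np

    def dynamic_range_db(x, frame=160, low_pct=1, high_pct=99):
        """Spread in dB between high and low percentiles of per-frame RMS level.

        One common way to operationalize speech dynamic range; the exact
        definition varies, which is what the study compares.
        """
        x = np.asarray(x, dtype=float)
        n = len(x) // frame
        rms = np.sqrt((x[: n * frame].reshape(n, frame) ** 2).mean(axis=1))
        level_db = 20 * np.log10(rms + 1e-12)
        lo, hi = np.percentile(level_db, [low_pct, high_pct])
        return hi - lo

    # Toy input: amplitude-modulated noise standing in for a sentence.
    rng = np.random.default_rng(5)
    t = np.linspace(0, 2, 32000)
    speechlike = rng.normal(size=t.size) * (0.2 + 0.8 * np.abs(np.sin(3 * np.pi * t)))
    print(round(dynamic_range_db(speechlike), 1), "dB")
    ```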

  8. Refusing in a Foreign Language: An Investigation of Problems Encountered by Chinese Learners of English

    ERIC Educational Resources Information Center

    Chang, Yuh-Fang

    2011-01-01

    Whereas the speech act of refusal is universal across language, the politeness value and the types of linguistic forms used to perform it vary across language and culture. The majority of the comparative pragmatic research findings were derived from one single source of data (i.e., either production data or perception data). Few attempts have been…

  9. Improving Your American English Pronunciation: Intonation. Creativity: New Ideas in Language Teaching, No. 20.

    ERIC Educational Resources Information Center

    Gomes de Matos, F.; Short, A. Green

    If the non-native teacher of English as a foreign language hopes to approach a high standard of oral competence in the language, he must cultivate some conscious perception and control of intonation. He can achieve this objective in various ways. Situations in which natural speech occurs would be ideal but few teachers have the opportunity for…

  10. Effects of Within-Talker Variability on Speech Intelligibility in Mandarin-Speaking Adult and Pediatric Cochlear Implant Patients

    PubMed Central

    Su, Qiaotong; Galvin, John J.; Zhang, Guoping; Li, Yongxin

    2016-01-01

    Cochlear implant (CI) speech performance is typically evaluated using well-enunciated speech produced at a normal rate by a single talker. CI users often have greater difficulty with variations in speech production encountered in everyday listening. Within a single talker, speaking rate, amplitude, duration, and voice pitch information may be quite variable, depending on the production context. The coarse spectral resolution afforded by the CI limits perception of voice pitch, which is an important cue for speech prosody and for tonal languages such as Mandarin Chinese. In this study, sentence recognition from the Mandarin speech perception database was measured in adult and pediatric Mandarin-speaking CI listeners for a variety of speaking styles: voiced speech produced at slow, normal, and fast speaking rates; whispered speech; voiced emotional speech; and voiced shouted speech. Recognition of Mandarin Hearing in Noise Test sentences was also measured. Results showed that performance was significantly poorer with whispered speech relative to the other speaking styles and that performance was significantly better with slow speech than with fast or emotional speech. Results also showed that adult and pediatric performance was significantly poorer with Mandarin Hearing in Noise Test than with Mandarin speech perception sentences at the normal rate. The results suggest that adult and pediatric Mandarin-speaking CI patients are highly susceptible to whispered speech, due to the lack of lexically important voice pitch cues and perhaps other qualities associated with whispered speech. The results also suggest that test materials may contribute to differences in performance observed between adult and pediatric CI users. PMID:27363714

  11. Speech Recognition and Parent Ratings From Auditory Development Questionnaires in Children Who Are Hard of Hearing.

    PubMed

    McCreery, Ryan W; Walker, Elizabeth A; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HAs) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-reported auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children rating scale, and an adaptation of the Speech, Spatial, and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open- and Closed-Set Test, Early Speech Perception test, Lexical Neighborhood Test, and Phonetically Balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared with peers with normal hearing matched for age, maternal educational level, and nonverbal intelligence. The effects of aided audibility, HA use, and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater aided audibility through their HAs, more hours of HA use, and better language abilities generally had higher parent ratings of auditory skills and better speech-recognition abilities in quiet and in noise than peers with less audibility, more limited HA use, or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Children who are hard of hearing continue to experience delays in auditory skill development and speech-recognition abilities compared with peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported before the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech-recognition abilities and also may enhance these skills by improving language abilities in children who are hard of hearing. Greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children's speech recognition.

  12. The Effect of Intensified Language Exposure on Accommodating Talker Variability.

    PubMed

    Antoniou, Mark; Wong, Patrick C M; Wang, Suiping

    2015-06-01

    This study systematically examined the role of intensified exposure to a second language on accommodating talker variability. English native listeners (n = 37) were compared with Mandarin listeners who had either lived in the United States for an extended period of time (n = 33) or had lived only in China (n = 44). Listeners responded to target words in an English word-monitoring task in which sequences of words were randomized. Half of the sequences were spoken by a single talker and the other half by multiple talkers. Mandarin listeners living in China were slower and less accurate than both English listeners and Mandarin listeners living in the United States. Mandarin listeners living in the United States were less accurate than English natives only in the more cognitively demanding mixed-talker condition. Mixed-talker speech affects processing in native and nonnative listeners alike, although the decrement is larger in nonnatives and further exaggerated in less proficient listeners. Language immersion improves listeners' ability to resolve talker variability, and this suggests that immersion may automatize nonnative processing, freeing cognitive resources that may play a crucial role in speech perception. These results lend support to the active control model of speech perception.

  13. How Language Is Embodied in Bilinguals and Children with Specific Language Impairment

    PubMed Central

    Adams, Ashley M.

    2016-01-01

    This manuscript explores the role of embodied views of language comprehension and production in bilingualism and specific language impairment. Reconceptualizing popular models of bilingual language processing, the embodied theory is first extended to this area. Issues such as semantic grounding in a second language and potential differences between early and late acquisition of a second language are discussed. Predictions are made about how this theory informs novel ways of thinking about teaching a second language. Secondly, the comorbidity of speech, language, and motor impairments and how embodiment theory informs the discussion of the etiology of these impairments is examined. A hypothesis is presented suggesting that what is often referred to as specific language impairment may not be so specific due to widespread subclinical motor deficits in this population. Predictions are made about how weaknesses and instabilities in speech motor control, even at a subclinical level, may disrupt the neural network that connects acoustic input, articulatory motor plans, and semantics. Finally, I make predictions about how this information informs clinical practice for professionals such as speech language pathologists and occupational and physical therapists. These new hypotheses are placed within the larger framework of the body of work pertaining to semantic grounding, action-based language acquisition, and action-perception links that underlie language learning and conceptual grounding. PMID:27582716

  14. Individual differences in language ability are related to variation in word recognition, not speech perception: evidence from eye movements.

    PubMed

    McMurray, Bob; Munson, Cheyenne; Tomblin, J Bruce

    2014-08-01

    The authors examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological, or lexical accounts by asking whether lexical competition is differentially sensitive to fine-grained acoustic variation. Adolescents with a range of language abilities (N = 74, including 35 impaired) participated in an experiment based on McMurray, Tanenhaus, and Aslin (2002). Participants heard tokens from six 9-step voice onset time (VOT) continua spanning 2 words (beach/peach, beak/peak, etc.) while viewing a screen containing pictures of those words and 2 unrelated objects. Participants selected the referent while eye movements to each picture were monitored as a measure of lexical activation. Fixations were examined as a function of both VOT and language ability. Eye movements were sensitive to within-category VOT differences: As VOT approached the boundary, listeners made more fixations to the competing word. This did not interact with language ability, suggesting that language impairment is not associated with differential auditory sensitivity or phonetic categorization. Listeners with poorer language skills showed heightened competitor fixations overall, suggesting a deficit in lexical processes. Language impairment may be better characterized by a deficit in lexical competition (inability to suppress competing words), rather than differences in phonological categorization or auditory abilities.
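
    Categorization along such continua is conventionally summarized by fitting a logistic function over VOT to estimate each listener's boundary and slope; gradient competitor fixations near that boundary are what index within-category sensitivity. A sketch of the fit with invented identification proportions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(vot, boundary, slope):
        """P(voiceless response | VOT) as a sigmoid over the continuum."""
        return 1 / (1 + np.exp(-slope * (vot - boundary)))

    # Hypothetical identification proportions along a beach/peach continuum (ms).
    vot_steps = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40])
    p_voiceless = np.array([0.02, 0.03, 0.08, 0.22, 0.55, 0.85, 0.95, 0.98, 0.99])

    (boundary, slope), _ = curve_fit(logistic, vot_steps, p_voiceless, p0=[20, 0.3])
    print(f"category boundary ~{boundary:.1f} ms VOT, slope {slope:.2f}")
    ```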

  15. Cross-Channel Amplitude Sweeps Are Crucial to Speech Intelligibility

    ERIC Educational Resources Information Center

    Prendergast, Garreth; Green, Gary G. R.

    2012-01-01

    Classical views of speech perception argue that the static and dynamic characteristics of spectral energy peaks (formants) are the acoustic features that underpin phoneme recognition. Here we use representations where the amplitude modulations of sub-band filtered speech are described, precisely, in terms of co-sinusoidal pulses. These pulses are…

  16. Status and progress of studies on the nature of speech, instrumentation for its investigation and practical applications

    NASA Astrophysics Data System (ADS)

    Liberman, A. M.

    1983-09-01

    This report is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications. Manuscripts cover the following topics: The association between comprehension of spoken sentences and early reading ability: The role of phonetic representation; Phonetic coding and order memory in relation to reading proficiency: A comparison of short-term memory for temporal and spatial order information; Exploring the oral and written language errors made by language disabled children; Perceiving phonetic events; Converging evidence in support of common dynamical principles for speech and movement coordination; Phase transitions and critical behavior in human bimanual coordination; Timing and coarticulation for alveolo-palatals and sequences of alveolar +J in Catalan; V-to-C coarticulation in Catalan VCV sequences: An articulatory and acoustical study; Prosody and the /S/-/c/ distinction; Intersections of tone and intonation in Thai; Simultaneous measurements of vowels produced by a hearing-impaired speaker; Extending formant transitions may not improve aphasics' perception of stop consonant place of articulation; Against a role of chirp identification in duplex perception; Further evidence for the role of relative timing in speech: A reply to Barry; Review (Phonological intervention: Concepts and procedures); and Review (Temporal variables in speech).

  17. Influences of selective adaptation on perception of audiovisual speech

    PubMed Central

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  18. Finding the music of speech: Musical knowledge influences pitch processing in speech.

    PubMed

    Vanden Bosch der Nederlanden, Christina M; Hannon, Erin E; Snyder, Joel S

    2015-10-01

    Few studies comparing music and language processing have adequately controlled for low-level acoustical differences, making it unclear whether differences in music and language processing arise from domain-specific knowledge, acoustic characteristics, or both. We controlled acoustic characteristics by using the speech-to-song illusion, which often results in a perceptual transformation to song after several repetitions of an utterance. Participants performed a same-different pitch discrimination task for the initial repetition (heard as speech) and the final repetition (heard as song). Better detection was observed for pitch changes that violated rather than conformed to Western musical scale structure, but only when utterances transformed to song, indicating that music-specific pitch representations were activated and influenced perception. This shows that music-specific processes can be activated when an utterance is heard as song, suggesting that the high-level status of a stimulus as either language or music can be behaviorally dissociated from low-level acoustic factors. Copyright © 2015 Elsevier B.V. All rights reserved.
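
    The scale-structure manipulation can be made concrete: a pitch change conforms if the new note stays within the key's diatonic set and violates otherwise. A toy sketch in semitone arithmetic (the key and note choices are invented, not the study's stimuli):

    ```python
    # Pitch classes of the C-major scale, in semitones above C.
    MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}

    def conforms_to_scale(semitones_above_tonic):
        """True if a note belongs to the (major) scale of the utterance's key."""
        return semitones_above_tonic % 12 in MAJOR_SCALE

    # A change from E (4) to F (5) conforms; from E to F-sharp (6) violates.
    print(conforms_to_scale(5), conforms_to_scale(6))
    ```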

  19. Talker-specific learning in amnesia: Insight into mechanisms of adaptive speech perception

    PubMed Central

    Trude, Alison M.; Duff, Melissa C.; Brown-Schmidt, Sarah

    2014-01-01

    A hallmark of human speech perception is the ability to comprehend speech quickly and effortlessly despite enormous variability across talkers. However, current theories of speech perception do not make specific claims about the memory mechanisms involved in this process. To examine whether declarative memory is necessary for talker-specific learning, we tested the ability of amnesic patients with severe declarative memory deficits to learn and distinguish the accents of two unfamiliar talkers by monitoring their eye-gaze as they followed spoken instructions. Analyses of the time-course of eye fixations showed that amnesic patients rapidly learned to distinguish these accents and tailored perceptual processes to the voice of each talker. These results demonstrate that declarative memory is not necessary for this ability and point to the involvement of non-declarative memory mechanisms. These results are consistent with findings that other social and accommodative behaviors are preserved in amnesia and contribute to our understanding of the interactions of multiple memory systems in the use and understanding of spoken language. PMID:24657480

  20. Working with culturally and linguistically diverse students and their families: perceptions and practices of school speech-language therapists in the United States.

    PubMed

    Maul, Christine A

    2015-01-01

    Speech and language therapists (SLTs) working in schools worldwide strive to deliver evidence-based services to diverse populations of students. Many suggestions have been made in the international professional literature regarding culturally competent delivery of speech and language services, but there has been limited qualitative investigation of practices school SLTs find to be most useful when modifying their approaches to meet the needs of culturally and linguistically diverse (CLD) students. To examine perceptions of nine school SLTs regarding modifications of usual practices when interacting with CLD students and their families; to compare reported practices with those suggested in professional literature; to draw clinical implications regarding the results; and to suggest future research to build a more extensive evidence base for culturally competent service delivery. For this qualitative research study, nine school SLTs in a diverse region of the USA were recruited to participate in a semi-structured interview designed to answer the question: What dominant themes, if any, can be found in SLTs' descriptions of how they modify their approaches, if at all, when interacting with CLD students and their family members? Analysis of data revealed the following themes: (1) language-a barrier and a bridge, (2) communicating through interpreters, (3) respect for cultural differences, and (4) positive experiences interacting with CLD family members. Participants reported making many modifications to their usual approaches that have been recommended as best practices in the international literature. However, some practices the SLTs reported to be effective were not emphasized or were not addressed at all in the literature. Practical implications of results are drawn and future research is suggested. © 2015 Royal College of Speech and Language Therapists.

  1. A comparison of speech intonation production and perception abilities of Farsi speaking cochlear implanted and normal hearing children.

    PubMed

    Moein, Narges; Khoddami, Seyyedeh Maryam; Shahbodaghi, Mohammad Rahim

    2017-10-01

    Cochlear implant prostheses facilitate spoken language development and speech comprehension in children with severe-to-profound hearing loss. However, these devices are limited in encoding information about fundamental frequency and pitch, which is essential for recognition of speech prosody. The purpose of the present study was to investigate the perception and production of intonation in cochlear-implanted children in comparison with normal-hearing children. The study was carried out on 25 cochlear-implanted children and 50 children with normal hearing. First, statement and question sentences were elicited using 10 action pictures. Fundamental frequency and pitch changes were identified using Praat software. These sentences were then judged by 7 adult listeners. In the second stage, 20 sentences were played for each child, who determined whether each was a question or a statement. Performance of cochlear-implanted children in perception and production of intonation was significantly lower than that of children with normal hearing. The difference in fundamental frequency and pitch changes between cochlear-implanted children and children with normal hearing was significant (P < 0.05). Cochlear-implanted children's performance in perception and production of intonation correlated significantly with age at surgery and duration of prosthesis use (P < 0.05). The findings of the current study show that cochlear prostheses have limited capacity to support the perception and production of intonation, and that age at surgery and duration of prosthesis use are important in reducing this limitation. According to these findings, speech-language pathologists should consider intonation in the treatment programs of cochlear-implanted children. Copyright © 2017 Elsevier B.V. All rights reserved.
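
    The f0 measures described (Praat-extracted fundamental frequency and pitch change) can be reproduced programmatically with parselmouth, a Python interface to Praat. A sketch assuming parselmouth is installed; the file name and the last-20%-of-voiced-frames window are illustrative choices, not the authors' procedure:

    ```python
    import numpy as np
    import parselmouth  # pip install praat-parselmouth

    def f0_summary(wav_path):
        """Median f0 plus utterance-final pitch change, a rough question cue."""
        snd = parselmouth.Sound(wav_path)
        pitch = snd.to_pitch()
        f0 = pitch.selected_array["frequency"]
        f0 = f0[f0 > 0]                     # voiced frames only
        final = f0[-max(1, len(f0) // 5):]  # last ~20% of voiced frames
        return np.median(f0), final[-1] - final[0]

    median_f0, final_change = f0_summary("sentence.wav")  # hypothetical file
    print(f"median f0 {median_f0:.0f} Hz; final pitch change {final_change:+.0f} Hz")
    ```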

  2. The contribution of the cerebellum to speech production and speech perception: clinical and functional imaging data.

    PubMed

    Ackermann, Hermann; Mathiak, Klaus; Riecker, Axel

    2007-01-01

    A classical tenet of clinical neurology proposes that cerebellar disorders may give rise to speech motor disorders (ataxic dysarthria), but spare perceptual and cognitive aspects of verbal communication. During the past two decades, however, a variety of higher-order deficits of speech production, e.g., more or less exclusive agrammatism, amnesic or transcortical motor aphasia, have been noted in patients with vascular cerebellar lesions, and transient mutism following resection of posterior fossa tumors in children may develop into similar constellations. Perfusion studies have provided evidence for cerebello-cerebral diaschisis as a possible pathomechanism in these instances. Tight functional connectivity between the language-dominant frontal lobe and the contralateral cerebellar hemisphere represents a prerequisite for such long-distance effects. Recent functional imaging data point to a contribution of the right cerebellar hemisphere, concomitant with language-dominant dorsolateral and medial frontal areas, to the temporal organization of a prearticulatory verbal code ('inner speech'), in terms of the sequencing of syllable strings at a speaker's habitual speech rate. Besides motor control, this network also appears to be engaged in executive functions, e.g., subvocal rehearsal mechanisms of verbal working memory, and seems to be recruited during distinct speech perception tasks. Taken together, these findings suggest that a prearticulatory verbal code bound to reciprocal right cerebellar/left frontal interactions might represent a common platform for a variety of cerebellar engagements in cognitive functions. The distinct computational operation provided by cerebellar structures within this framework appears to be the concatenation of syllable strings into coarticulated sequences.

  3. Result on speech perception after conversion from Spectra® to Freedom®.

    PubMed

    Magalhães, Ana Tereza de Matos; Goffi-Gomez, Maria Valéria Schmidt; Hoshino, Ana Cristina; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; Brito, Rubens

    2012-04-01

    New technology in the Freedom® speech processor for cochlear implants was developed to improve how incoming acoustic sound is processed; this applies not only to new users, but also to previous generations of cochlear implants. The aim was to identify the contribution of this technology for Nucleus 22® users on speech perception tests in silence and in noise, and on audiometric thresholds. A cross-sectional cohort study was undertaken. Seventeen patients were selected. The last map based on the Spectra® was revised and optimized before starting the tests. Troubleshooting was used to identify malfunctions. To identify the contribution of the Freedom® technology for the Nucleus 22®, auditory thresholds and speech perception tests were performed in free field in sound-proof booths. Recorded monosyllables and sentences in silence and in noise (SNR = 0 dB) were presented at 60 dB SPL. The nonparametric Wilcoxon test for paired data was used to compare groups. The Freedom® processor used with the Nucleus 22® showed a statistically significant difference in all speech perception tests and audiometric thresholds. The Freedom® technology improved speech perception performance and audiometric thresholds in patients with the Nucleus 22®.
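
    As a sketch of the statistical comparison named here, the paired Wilcoxon signed-rank test is available in SciPy; the scores below are invented for illustration, not the study's data.

    ```python
    # Minimal sketch: paired comparison of speech-perception scores with the
    # Wilcoxon signed-rank test. Scores are hypothetical values (percent
    # correct per patient with the old vs. new processor), not study data.
    from scipy.stats import wilcoxon

    spectra = [40, 52, 36, 60, 44, 58, 50, 42, 55, 48]   # hypothetical
    freedom = [55, 60, 50, 72, 58, 66, 63, 50, 70, 61]   # hypothetical

    stat, p = wilcoxon(spectra, freedom)
    print(f"W = {stat}, p = {p:.4f}")  # p < 0.05 -> significant paired difference
    ```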

  4. A Cross-Linguistic Study of the Development of Gesture and Speech in Zulu and French Oral Narratives

    ERIC Educational Resources Information Center

    Nicolas, Ramona Kunene; Guidetti, Michele; Colletta, Jean-Marc

    2017-01-01

    The present study reports on a developmental and cross-linguistic study of oral narratives produced by speakers of Zulu (a Bantu language) and French (a Romance language). Specifically, we focus on oral narrative performance as a bimodal (i.e., linguistic and gestural) behaviour during the late language acquisition phase. We analyzed seventy-two…

  5. Speech perception skills of deaf infants following cochlear implantation: a first report

    PubMed Central

    Houston, Derek M.; Pisoni, David B.; Kirk, Karen Iler; Ying, Elizabeth A.; Miyamoto, Richard T.

    2012-01-01

    Objective: We adapted a behavioral procedure that has been used extensively with normal-hearing (NH) infants, the visual habituation (VH) procedure, to assess deaf infants’ discrimination and attention to speech. Methods: Twenty-four NH 6-month-olds, 24 NH 9-month-olds, and 16 deaf infants at various ages before and following cochlear implantation (CI) were tested in a sound booth on their caregiver’s lap in front of a TV monitor. During the habituation phase, each infant was presented with a repeating speech sound (e.g. ‘hop hop hop’) paired with a visual display of a checkerboard pattern on half of the trials (‘sound trials’) and only the visual display on the other half (‘silent trials’). When the infant’s looking time decreased and reached a habituation criterion, a test phase began. This consisted of two trials: an ‘old trial’ that was identical to the ‘sound trials’ and a ‘novel trial’ that consisted of a different repeating speech sound (e.g. ‘ahhh’) paired with the same checkerboard pattern. Results: During the habituation phase, NH infants looked significantly longer during the sound trials than during the silent trials. However, deaf infants who had received cochlear implants (CIs) displayed a much weaker preference for the sound trials. On the other hand, both NH infants and deaf infants with CIs attended significantly longer to the visual display during the novel trial than during the old trial, suggesting that they were able to discriminate the speech patterns. Before receiving CIs, deaf infants did not show any preferences. Conclusions: Taken together, the findings suggest that deaf infants who receive CIs are able to detect and discriminate some speech patterns. However, their overall attention to speech sounds may be less than NH infants’. Attention to speech may impact other aspects of speech perception and spoken language development, such as segmenting words from fluent speech and learning novel words. Implications of the effects of early auditory deprivation and age at CI on speech perception and language development are discussed. PMID:12697350
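
    The record does not specify the habituation criterion. A common convention in visual-habituation work (assumed here, not taken from this study) is to end habituation once looking time over the most recent trials drops below half the initial level; a minimal sketch of such a check:

    ```python
    # Minimal sketch of a habituation criterion check for a visual-habituation
    # procedure. The rule (mean of the last 3 trials < 50% of the mean of the
    # first 3) is a common convention, assumed here, not the study's own.
    def habituated(looking_times, window=3, criterion=0.5):
        """True once mean looking time over the last `window` trials drops
        below `criterion` times the mean of the first `window` trials."""
        if len(looking_times) < 2 * window:
            return False
        baseline = sum(looking_times[:window]) / window
        recent = sum(looking_times[-window:]) / window
        return recent < criterion * baseline

    trials = [12.1, 11.4, 10.8, 7.9, 5.2, 4.6, 4.0]  # seconds; hypothetical
    print(habituated(trials))  # True -> proceed to the test phase
    ```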

  6. Perceptions of parents and speech and language therapists on the effects of paediatric cochlear implantation and habilitation and education following it.

    PubMed

    Huttunen, Kerttu; Välimaa, Taina

    2012-01-01

    During the process of implantation, parents may have rather heterogeneous expectations and concerns about their child's development and the functioning of habilitation and education services. Their views on habilitation and education are important for building family-centred practices. We explored the perceptions of parents and speech and language therapists (SLTs) on the effects of implantation on the child and the family and on the quality of services provided. Their views were also compared. Parents and SLTs of 18 children filled out questionnaires containing open- and closed-ended questions at 6 months and annually 1-5 years after activation of the implant. Their responses were analysed mainly using data-based inductive content analysis. Positive experiences outnumbered negative ones in the responses of both the parents and the SLTs surveyed. The parents were particularly satisfied with the improvement in communication and expanded social life in the family. These were the most prevalent themes also raised by the SLTs. The parents were also satisfied with the organization and content of habilitation. Most of the negative experiences were related to arrangement of hospital visits and the usability and maintenance of speech processor technology. Some children did not receive enough speech and language therapy, and some of the parents were dissatisfied with educational services. The habilitation process had generally required parental efforts at an expected level. However, parents with a child with at least one concomitant problem experienced habilitation as more stressful than did other parents. Parents and SLTs had more positive than negative experiences with implantation. As the usability and maintenance of speech processor technology were often compromised, we urge implant centres to ensure sufficient personnel for technical maintenance. It is also important to promote services by providing enough information and parental support. © 2011 Royal College of Speech & Language Therapists.

  7. A common functional neural network for overt production of speech and gesture.

    PubMed

    Marstaller, L; Burianová, H

    2015-01-22

    The perception of co-speech gestures, i.e., hand movements that co-occur with speech, has been investigated by several studies. The results show that the perception of co-speech gestures engages a core set of frontal, temporal, and parietal areas. However, no study has yet investigated the neural processes underlying the production of co-speech gestures. Specifically, it remains an open question whether Broca's area is central to the coordination of speech and gestures as has been suggested previously. The objective of this study was to use functional magnetic resonance imaging to (i) investigate the regional activations underlying overt production of speech, gestures, and co-speech gestures, and (ii) examine functional connectivity with Broca's area. We hypothesized that co-speech gesture production would activate frontal, temporal, and parietal regions that are similar to areas previously found during co-speech gesture perception and that both speech and gesture as well as co-speech gesture production would engage a neural network connected to Broca's area. Whole-brain analysis confirmed our hypothesis and showed that co-speech gesturing did engage brain areas that form part of networks known to subserve language and gesture. Functional connectivity analysis further revealed a functional network connected to Broca's area that is common to speech, gesture, and co-speech gesture production. This network consists of brain areas that play essential roles in motor control, suggesting that the coordination of speech and gesture is mediated by a shared motor control network. Our findings thus lend support to the idea that speech can influence co-speech gesture production on a motoric level. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  8. Liberated Learning: Analysis of University Students' Perceptions and Experiences with Continuous Automated Speech Recognition

    ERIC Educational Resources Information Center

    Ryba, Ken; McIvor, Tom; Shakir, Maha; Paez, Di

    2006-01-01

    This study examined continuous automated speech recognition in the university lecture theatre. The participants were both native speakers of English (L1) and English as a second language students (L2) enrolled in an information systems course (Total N=160). After an initial training period, an L2 lecturer in information systems delivered three…

  9. Lexical and sublexical units in speech perception.

    PubMed

    Giroux, Ibrahima; Rey, Arnaud

    2009-03-01

    Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (simple recurrent networks: Elman, 1990; PARSER: Perruchet & Vinter, 1998) in an experiment where we compare the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with PARSER's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes. Copyright © 2009, Cognitive Science Society, Inc.
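
    As a toy illustration of the bracketing idea (boundaries inserted at dips in syllable-to-syllable transitional probability), the sketch below segments a short artificial stream. The stream and the dip rule are didactic assumptions, not the SRN or PARSER implementations tested in the study.

    ```python
    # Toy bracketing strategy: compute transitional probabilities (TPs) from
    # a continuous syllable stream and posit word boundaries at local TP dips.
    from collections import Counter

    stream = ("tibudo pabiku golatu pabiku tibudo golatu "
              "pabiku tibudo golatu tibudo").replace(" ", "")
    syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])
    tp = {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

    # Insert a boundary wherever the TP into the next syllable dips below
    # the TPs on either side of it (within-word TPs here are 1.0).
    for i in range(1, len(syllables) - 2):
        left = tp[(syllables[i - 1], syllables[i])]
        mid = tp[(syllables[i], syllables[i + 1])]
        right = tp[(syllables[i + 1], syllables[i + 2])]
        if mid < left and mid < right:
            print(f"boundary after '{syllables[i]}' (TP = {mid:.2f})")
    ```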

  10. Population Estimates, Health Care Characteristics, and Material Hardship Experiences of U.S. Children With Parent-Reported Speech-Language Difficulties: Evidence From Three Nationally Representative Surveys.

    PubMed

    Sonik, Rajan A; Parish, Susan L; Akobirshoev, Ilhom; Son, Esther; Rosenthal, Eliana

    2017-10-05

    To provide estimates for the prevalence of parent-reported speech-language difficulties in U.S. children, and to describe the levels of health care access and material hardship in this population. We tabulated descriptive and bivariate statistics using cross-sectional data from the 2007 and 2011/2012 iterations of the National Survey of Children's Health, the 2005/2006 and 2009/2010 iterations of the National Survey of Children with Special Health Care Needs, and the 2004 and 2008 panels of the Survey of Income and Program Participation. Prevalence estimates ranged from 1.8% to 5.0%, with data from two of the three surveys preliminarily indicating increased prevalence in recent years. The largest health care challenge was in accessing care coordination, with 49%-56% of children with parent-reported speech-language difficulties lacking full access. Children with parent-reported speech-language difficulties were more likely than peers without any indications of speech-language difficulties to live in households experiencing each measured material hardship and participating in each measured public benefit program (e.g., 20%-22% experiencing food insecurity, compared to 11%-14% of their peers without any indications of speech-language difficulties). We found mixed preliminary evidence to suggest that the prevalence of parent-reported speech-language difficulties among children may be rising. These children face heightened levels of material hardship and barriers in accessing health care.

  11. Integrating Articulatory Constraints into Models of Second Language Phonological Acquisition

    ERIC Educational Resources Information Center

    Colantoni, Laura; Steele, Jeffrey

    2008-01-01

    Models such as Eckman's markedness differential hypothesis, Flege's speech learning model, and Brown's feature-based theory of perception seek to explain and predict the relative difficulty second language (L2) learners face when acquiring new or similar sounds. In this paper, we test their predictive adequacy as concerns native English speakers'…

  12. Classroom Noise and Children Learning through a Second Language: Double Jeopardy?

    ERIC Educational Resources Information Center

    Nelson, Peggy; Kohnert, Kathryn; Sabur, Sabina; Shaw, Daniel

    2005-01-01

    Purpose: Two studies were conducted to investigate the effects of classroom noise on attention and speech perception in native Spanish-speaking second graders learning English as their second language (L2) as compared to English-only-speaking (EO) peers. Method: Study 1 measured children's on-task behavior during instructional activities with and…

  13. Phonetic convergence in spontaneous conversations as a function of interlocutor language distance

    PubMed Central

    Kim, Midam; Horton, William S.; Bradlow, Ann R.

    2013-01-01

    This study explores phonetic convergence during conversations between pairs of talkers with varying language distance. Specifically, we examined conversations within pairs of native English talkers and pairs of native Korean talkers who had either the same or different regional dialects, and between native and nonnative talkers of English. To measure phonetic convergence, an independent group of listeners judged the similarity of utterance samples from each talker through an XAB perception test, in which X was a sample of one talker’s speech and A and B were samples from the other talker at either early or late portions of the conversation. The results showed greater convergence for same-dialect pairs than for either the different-dialect pairs or the different-L1 pairs. These results generally support the hypothesis that there is a relationship between phonetic convergence and interlocutor language distance. We interpret this pattern as suggesting that phonetic convergence between talker pairs that vary in the degree of their initial language alignment may be dynamically mediated by two parallel mechanisms: the need for intelligibility and the extra demands of nonnative speech production and perception. PMID:23637712

  14. Effects of Parental Deafness and Early Exposure to Manual Communication on the Cognitive Skills, English Language Skill, and Field Independence of Young Deaf Adults.

    ERIC Educational Resources Information Center

    Parasnis, Ila

    1983-01-01

    Differential effects of parental deafness and early exposure to manual communication were not observed in the cognitive and communication performance of the 38 experimental subjects. Furthermore, the delayed sign language group performed significantly better than the early American Sign Language group on tests of speech perception and speech…

  15. The Emergence of Productive Speech and Language in Spanish-Learning Paediatric Cochlear Implant Users

    ERIC Educational Resources Information Center

    Moreno-Torres, Ignacio

    2014-01-01

    It has been proposed that cochlear implant users may develop robust categorical perception skills, but that they show limited precision in perception. This article explores whether a parallel contrast is observable in production and whether, despite acquiring typical linguistic representations, their early words are inconsistent. The participants were…

  16. Parent Perceptions of the Impact of Stuttering on Their Preschoolers and Themselves

    ERIC Educational Resources Information Center

    Langevin, Marilyn; Packman, Ann; Onslow, Mark

    2010-01-01

    Speech-language pathologists (SLPs) are advised to consider the distress of preschoolers and parents along with the social consequences of the child's stuttering when deciding whether to begin or delay treatment. Seventy-seven parents completed a survey that yielded quantitative and qualitative data that reflected their perceptions of the impact…

  17. The Separability of Morphological Processes from Semantic Meaning and Syntactic Class in Production of Single Words: Evidence from the Hebrew Root Morpheme.

    PubMed

    Deutsch, Avital

    2016-02-01

    In the present study we investigated to what extent the morphological facilitation effect induced by the derivational root morpheme in Hebrew is independent of semantic meaning and grammatical information of the part of speech involved. Using the picture-word interference paradigm with auditorily presented distractors, Experiment 1 compared the facilitation effect induced by semantically transparent versus semantically opaque morphologically related distractor words (i.e., a shared root) on the production latency of bare nouns. The results revealed almost the same amount of facilitation for both relatedness conditions. These findings accord with the results of the few studies that have addressed this issue in production in Indo-European languages, as well as previous studies in written word perception in Hebrew. Experiment 2 compared the root's facilitation effect, induced by morphologically related nominal versus verbal distractors, on the production latency of bare nouns. The results revealed a facilitation effect of similar size induced by the shared root, regardless of the distractor's part of speech. It is suggested that the principle that governs lexical organization at the level of morphology, at least for Hebrew roots, is form-driven and independent of semantic meaning. This principle of organization crosses the linguistic domains of production and written word perception, as well as grammatical organization according to part of speech.

  18. Learning to perceive and recognize a second language: the L2LP model revised.

    PubMed

    van Leussen, Jan-Willem; Escudero, Paola

    2015-01-01

    We present a test of a revised version of the Second Language Linguistic Perception (L2LP) model, a computational model of the acquisition of second language (L2) speech perception and recognition. The model draws on phonetic, phonological, and psycholinguistic constructs to explain a number of L2 learning scenarios. However, a recent computational implementation failed to validate a theoretical proposal for a learning scenario where the L2 has fewer phonemic categories than the native language (L1) along a given acoustic continuum. According to the L2LP, learners faced with this learning scenario must not only shift their old L1 phoneme boundaries but also reduce the number of categories employed in perception. Our proposed revision to L2LP successfully accounts for this updating in the number of perceptual categories as a process driven by the meaning of lexical items, rather than by the learners' awareness of the number and type of phonemes that are relevant in their new language, as the previous version of L2LP assumed. Results of our simulations show that meaning-driven learning correctly predicts the developmental path of L2 phoneme perception seen in empirical studies. Additionally, and to contribute to a long-standing debate in psycholinguistics, we test two versions of the model, with the stages of phonemic perception and lexical recognition being either sequential or interactive. Both versions succeed in learning to recognize minimal pairs in the new L2, but make diverging predictions on learners' resulting phonological representations. In sum, the proposed revision to the L2LP model contributes to our understanding of L2 acquisition, with implications for speech processing in general.

  19. The Relation Between Child Versus Parent Report of Chronic Fatigue and Language/Literacy Skills in School-Age Children with Cochlear Implants.

    PubMed

    Werfel, Krystal L; Hendricks, Alison Eisel

    2016-01-01

    Preliminary evidence suggests that children with hearing loss experience elevated levels of chronic fatigue compared with children with normal hearing. Chronic fatigue is associated with decreased academic performance in many clinical populations. Children with cochlear implants as a group exhibit deficits in language and literacy skills; however, the relation between chronic fatigue and language and literacy skills for children with cochlear implants is unclear. The purpose of this study was to explore subjective ratings of chronic fatigue by children with cochlear implants and their parents, as well as the relation between chronic fatigue and language and literacy skills in this population. Nineteen children with cochlear implants in grades 3 to 6 and one of their parents separately completed a subjective chronic fatigue scale, on which they rated how much the child experienced physical, sleep/rest, and cognitive fatigue over the past month. In addition, children completed an assessment battery that included measures of speech perception, oral language, word reading, and spelling. Children and parents reported different levels of chronic child physical and sleep/rest fatigue. In both cases, parents reported significantly less fatigue than did children. Children and parents did not report different levels of chronic child cognitive fatigue. Child report of physical fatigue was related to speech perception, language, reading, and spelling. Child report of sleep/rest and cognitive fatigue was related to speech perception and language but not to reading or spelling. Parent report of child fatigue was not related to children's language and literacy skills. Taken as a whole, results suggested that parents underestimate the fatigue experienced by children with cochlear implants. Child report of physical fatigue was robustly related to language and literacy skills. Children with cochlear implants are likely more accurate at reporting physical fatigue than cognitive fatigue. Clinical practice should take fatigue into account when developing treatment plans for children with cochlear implants, and research should continue to develop a comprehensive model of fatigue in children with cochlear implants.

  20. Auditory scene analysis in school-aged children with developmental language disorders

    PubMed Central

    Sussman, E.; Steinschneider, M.; Lee, W.; Lawson, K.

    2014-01-01

    Natural sound environments are dynamic, with overlapping acoustic input originating from simultaneously active sources. A key function of the auditory system is to integrate sensory inputs that belong together and segregate those that come from different sources. We hypothesized that this skill is impaired in individuals with phonological processing difficulties. There is considerable disagreement about whether phonological impairments observed in children with developmental language disorders can be attributed to specific linguistic deficits or to more general acoustic processing deficits. However, most tests of general auditory abilities have been conducted with a single set of sounds. We assessed the ability of school-aged children (7–15 years) to parse complex auditory non-speech input, and determined whether the presence of phonological processing impairments was associated with stream perception performance. A key finding was that children with language impairments did not show the same developmental trajectory for stream perception as typically developing children. In addition, children with language impairments required larger frequency separations between sounds to hear distinct streams compared to age-matched peers. Furthermore, phonological processing ability was a significant predictor of stream perception measures, but only in the older age groups. No such association was found in the youngest children. These results indicate that children with language impairments have difficulty parsing speech streams, or identifying individual sound events when there are competing sound sources. We conclude that language group differences may in part reflect fundamental maturational disparities in the analysis of complex auditory scenes. PMID:24548430

  1. A cross-linguistic fMRI study of perception of intonation and emotion in Chinese.

    PubMed

    Gandour, Jack; Wong, Donald; Dzemidzic, Mario; Lowe, Mark; Tong, Yunxia; Li, Xiaojian

    2003-03-01

    Conflicting data from neurobehavioral studies of the perception of intonation (linguistic) and emotion (affective) in spoken language highlight the need to further examine how functional attributes of prosodic stimuli are related to hemispheric differences in processing capacity. Because of similarities in their acoustic profiles, intonation and emotion permit us to assess to what extent hemispheric lateralization of speech prosody depends on functional instead of acoustical properties. To examine how the brain processes linguistic and affective prosody, an fMRI study was conducted using Chinese, a tone language in which both intonation and emotion may be signaled prosodically, in addition to lexical tones. Ten Chinese and 10 English subjects were asked to perform discrimination judgments of intonation (I: statement, question) and emotion (E: happy, angry, sad) presented in semantically neutral Chinese sentences. A baseline task required passive listening to the same speech stimuli (S). In direct between-group comparisons, the Chinese group showed left-sided frontoparietal activation for both intonation (I vs. S) and emotion (E vs. S) relative to baseline. When comparing intonation relative to emotion (I vs. E), the Chinese group demonstrated prefrontal activation bilaterally; parietal activation in the left hemisphere only. The reverse comparison (E vs. I), on the other hand, revealed that activation occurred in anterior and posterior prefrontal regions of the right hemisphere only. These findings show that some aspects of perceptual processing of emotion are dissociable from intonation, and, moreover, that they are mediated by the right hemisphere. Copyright 2003 Wiley-Liss, Inc.

  2. Evaluation of central auditory processing in children with Specific Language Impairment.

    PubMed

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Piłka, Adam; Skarżyński, Henryk

    2015-01-01

    Specific Language Impairment (SLI) affects about 7-15% of school-age children, and according to the currently accepted diagnostic criteria, it is presumed that these children do not suffer from hearing impairment. The goal of this work was to assess anomalies of central auditory processes in a group of children diagnosed with specific language impairment. The material consisted of 200 children aged 7-10 years (100 children in the study group and 100 in the control group). Selected psychoacoustic tests (Frequency Pattern Test - FPT, Duration Pattern Test - DPT, Dichotic Digit Test - DDT, Time Compressed Sentence Test - CST, Gap Detection Test - GDT) were performed in all children. Results were subjected to statistical analysis. It was observed that the mean results obtained in individual age groups in the study group were significantly lower than in the control group. Based on the conducted studies, we may conclude that children with SLI suffer from disorders of some higher auditory functions, which substantiates a diagnosis of hearing disorders according to the ASHA (American Speech-Language-Hearing Association) guidelines. The use of sound-based rather than verbal tests eliminates the possibility that the observed perceptual problems involve only the perception of speech and therefore reflect difficulties with understanding speech rather than central hearing disorders. The lack of literature data on the significance of the FPT, DPT, DDT, CST and GDT tests in children with specific language impairment precludes comparison of the acquired results and makes them unique.

  3. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
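
    The core modeling move here, fitting a Gaussian mixture over joint auditory-visual cue distributions, can be sketched with scikit-learn. The cue dimensions (e.g., VOT and lip aperture) and distribution parameters below are illustrative assumptions, not the paper's actual simulations.

    ```python
    # Minimal sketch: fit a Gaussian mixture to simulated two-dimensional
    # auditory + visual cue values and read category structure off the
    # learned components. Cue dimensions and parameters are illustrative.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Two phonological categories, each emitting an auditory cue (e.g., VOT,
    # ms) and a correlated visual cue (e.g., lip aperture, arbitrary units).
    cat_a = rng.normal(loc=[10.0, 0.3], scale=[5.0, 0.1], size=(500, 2))
    cat_b = rng.normal(loc=[50.0, 0.8], scale=[8.0, 0.1], size=(500, 2))
    data = np.vstack([cat_a, cat_b])

    gmm = GaussianMixture(n_components=2, covariance_type="full").fit(data)
    print(gmm.means_)                   # recovered category centers in cue space
    print(gmm.predict([[30.0, 0.55]]))  # category for an ambiguous AV token
    ```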

  4. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.

  5. Cochlear implants and spoken language processing abilities: review and assessment of the literature.

    PubMed

    Peterson, Nathaniel R; Pisoni, David B; Miyamoto, Richard T

    2010-01-01

    Cochlear implants (CIs) process sounds electronically and then transmit electric stimulation to the cochlea of individuals with sensorineural deafness, restoring some sensation of auditory perception. Many congenitally deaf CI recipients achieve a high degree of accuracy in speech perception and develop near-normal language skills. Post-lingually deafened implant recipients often regain the ability to understand and use spoken language with or without the aid of visual input (i.e. lip reading). However, there is wide variation in individual outcomes following cochlear implantation, and some CI recipients never develop useable speech and oral language skills. The causes of this enormous variation in outcomes are only partly understood at the present time. The variables most strongly associated with language outcomes are age at implantation and mode of communication in rehabilitation. Thus, some of the more important factors determining success of cochlear implantation are broadly related to neural plasticity that appears to be transiently present in deaf individuals. In this article we review the expected outcomes of cochlear implantation, potential predictors of those outcomes, the basic science regarding critical and sensitive periods, and several new research directions in the field of cochlear implantation.

  6. Bilingual Language Assessment: Contemporary Versus Recommended Practice in American Schools.

    PubMed

    Arias, Graciela; Friberg, Jennifer

    2017-01-01

    The purpose of this study was to identify current practices of school-based speech-language pathologists (SLPs) in the United States for bilingual language assessment and compare them to American Speech-Language-Hearing Association (ASHA) best practice guidelines and mandates of the Individuals with Disabilities Education Act (IDEA, 2004). The study was modeled to replicate portions of Caesar and Kohler's (2007) study and expanded to include a nationally representative sample. A total of 166 respondents completed an electronic survey. Results indicated that the majority of respondents have performed bilingual language assessments. Furthermore, the most frequently used informal and standardized assessments were identified. SLPs identified supports and barriers to assessment, as well as their perceptions of graduate preparation. The findings of this study demonstrated that although SLPs have become more compliant with ASHA and IDEA guidelines, there is room for improvement in terms of adequate training in bilingual language assessment.

  7. Which Language R You Speaking? /r/ as a Language Marker in Tyrolean and Italian Bilinguals.

    PubMed

    Kaland, Constantijn; Galatà, Vincenzo; Spreafico, Lorenzo; Vietti, Alessandro

    2017-12-01

    Across languages of the world the /r/ sound is known for its variability. This variability has been investigated using articulatory models as well as in sociolinguistic studies. The current study investigates to what extent /r/ is a marker of a bilingual's dominant language. To this end, a reading task was carried out by bilingual speakers from South Tyrol, who produce /r/ differently according to whether they dominantly speak Tyrolean or Italian. The recorded reading data were subsequently used in a perception experiment to investigate whether South Tyrolean bilingual listeners are able to identify the dominant language of the speaker. Results indicate that listeners use /r/ as a cue to determine the dominant language of the speaker whilst relying on articulatory distinctions between the variants. It is furthermore shown that /r/ correlates with three interdependent variables: the sociolinguistic background of the speakers, their speech production, and how their speech is perceived.

  8. Population Estimates, Health Care Characteristics, and Material Hardship Experiences of U.S. Children with Parent-Reported Speech-Language Difficulties: Evidence from Three Nationally Representative Surveys

    ERIC Educational Resources Information Center

    Sonik, Rajan A.; Parish, Susan L.; Akorbirshoev, Ilhom; Son, Esther; Rosenthal, Eliana

    2014-01-01

    Purpose: To provide estimates for the prevalence of parent-reported speech-language difficulties in U.S. children, and to describe the levels of health care access and material hardship in this population. Method: We tabulated descriptive and bivariate statistics using cross-sectional data from the 2007 and 2011/2012 iterations of the National…

  9. Speech recognition and parent-ratings from auditory development questionnaires in children who are hard of hearing

    PubMed Central

    McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Objectives: Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HA) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children’s auditory experience on parent-report auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Design: Parent ratings on auditory development questionnaires and children’s speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, the Parents Evaluation of Oral/Aural Performance in Children Rating Scale, and an adaptation of the Speech, Spatial and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open and Closed set task, the Early Speech Perception Test, the Lexical Neighborhood Test, and Phonetically-balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared to peers with normal hearing matched for age, maternal educational level and nonverbal intelligence. The effects of aided audibility, HA use and language ability on parent responses to auditory development questionnaires and on children’s speech recognition were also examined. Results: Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater aided audibility through their HAs, more hours of HA use and better language abilities generally had higher parent ratings of auditory skills and better speech recognition abilities in quiet and in noise than peers with less audibility, more limited HA use or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Conclusions: Children who are hard of hearing continue to experience delays in auditory skill development and speech recognition abilities compared to peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported prior to the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech recognition abilities, and may also enhance these skills by improving language abilities in children who are hard of hearing. A greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children’s speech recognition. PMID:26731160

  10. Assessing Music Perception in Young Children: Evidence for and Psychometric Features of the M-Factor.

    PubMed

    Barros, Caio G; Swardfager, Walter; Moreno, Sylvain; Bortz, Graziela; Ilari, Beatriz; Jackowski, Andrea P; Ploubidis, George; Little, Todd D; Lamont, Alexandra; Cogo-Moreira, Hugo

    2017-01-01

    Given the relationship between language acquisition and music processing, musical perception (MP) skills have been proposed as a tool for early diagnosis of speech and language difficulties; therefore, a psychometric instrument is needed to assess music perception in children under 10 years of age, a crucial period in neurodevelopment. We created a set of 80 musical stimuli encompassing seven domains of music perception to inform perception of tonal, atonal, and modal stimuli, in a random sample of 1006 children, 6-13 years of age, equally distributed from first to fifth grades, from 14 schools (38% private schools) in São Paulo State. The underlying model was tested using confirmatory factor analysis. A model encompassing seven orthogonal specific domains (contour, loudness, scale, timbre, duration, pitch, and meter) and one general music perception factor, the "m-factor," showed excellent fit indices. The m-factor, previously hypothesized in the literature but never formally tested, explains 93% of the reliable variance in measurement, while only 3.9% of the reliable variance could be attributed to the multidimensionality caused by the specific domains. The 80 items showed no differential item functioning based on sex, age, or enrolment in public vs. private school, demonstrating the important psychometric feature of invariance. Like Charles Spearman's g-factor of intelligence, the m-factor is robust and reliable. It provides a convenient measure of auditory stimulus apprehension that does not rely on verbal information, offering a new opportunity to probe biological and psychological relationships with music perception phenomena and the etiologies of speech and language disorders.
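
    The claim that the general factor carries most of the reliable variance can be illustrated with an explained-common-variance style computation over bifactor loadings. The loadings below are hypothetical, not the study's estimates.

    ```python
    # Sketch of an explained-common-variance (ECV) type statistic for a
    # bifactor model: the share of common variance carried by the general
    # factor versus the specific domains. Loadings here are hypothetical.
    import numpy as np

    general = np.array([0.7, 0.65, 0.72, 0.6, 0.68, 0.66, 0.7])   # m-factor
    specific = np.array([0.15, 0.1, 0.12, 0.2, 0.1, 0.14, 0.12])  # domains

    ecv = (general**2).sum() / ((general**2).sum() + (specific**2).sum())
    print(f"general factor carries {ecv:.1%} of the common variance")
    ```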

  11. Assessing Music Perception in Young Children: Evidence for and Psychometric Features of the M-Factor

    PubMed Central

    Barros, Caio G.; Swardfager, Walter; Moreno, Sylvain; Bortz, Graziela; Ilari, Beatriz; Jackowski, Andrea P.; Ploubidis, George; Little, Todd D.; Lamont, Alexandra; Cogo-Moreira, Hugo

    2017-01-01

    Given the relationship between language acquisition and music processing, musical perception (MP) skills have been proposed as a tool for early diagnosis of speech and language difficulties; therefore, a psychometric instrument is needed to assess music perception in children under 10 years of age, a crucial period in neurodevelopment. We created a set of 80 musical stimuli encompassing seven domains of music perception to inform perception of tonal, atonal, and modal stimuli, in a random sample of 1006 children, 6–13 years of age, equally distributed from first to fifth grades, from 14 schools (38% private schools) in São Paulo State. The underlying model was tested using confirmatory factor analysis. A model encompassing seven orthogonal specific domains (contour, loudness, scale, timbre, duration, pitch, and meter) and one general music perception factor, the “m-factor,” showed excellent fit indices. The m-factor, previously hypothesized in the literature but never formally tested, explains 93% of the reliable variance in measurement, while only 3.9% of the reliable variance could be attributed to the multidimensionality caused by the specific domains. The 80 items showed no differential item functioning based on sex, age, or enrolment in public vs. private school, demonstrating the important psychometric feature of invariance. Like Charles Spearman's g-factor of intelligence, the m-factor is robust and reliable. It provides a convenient measure of auditory stimulus apprehension that does not rely on verbal information, offering a new opportunity to probe biological and psychological relationships with music perception phenomena and the etiologies of speech and language disorders. PMID:28174518

  12. Evaluation of Speech Perception via the Use of Hearing Loops and Telecoils

    PubMed Central

    Holmes, Alice E.; Kricos, Patricia B.; Gaeta, Laura; Martin, Sheridan

    2015-01-01

    A cross-sectional, experimental, and randomized repeated-measures design study was used to examine the objective and subjective value of telecoil and hearing loop systems. Word recognition and speech perception were tested in 12 older adult hearing aid users using the telecoil and microphone inputs in quiet and noise conditions. Participants were asked to subjectively rate cognitive listening effort and self-confidence for each condition. Significant improvement in speech perception with the telecoil over microphone input in both quiet and noise was found along with significantly less reported cognitive listening effort and high self-confidence. The use of telecoils with hearing aids should be recommended for older adults with hearing loss. PMID:28138458

  13. General perceptual contributions to lexical tone normalization.

    PubMed

    Huang, Jingyuan; Holt, Lori L

    2009-06-01

    Within tone languages that use pitch variations to contrast meaning, large variability exists in the pitches produced by different speakers. Context-dependent perception may help to resolve this perceptual challenge. However, whether speakers rely on context in contour tone perception is unclear; previous studies have produced inconsistent results. The present study aimed to provide an unambiguous test of the effect of context on contour lexical tone perception and to explore its underlying mechanisms. In three experiments, Mandarin listeners' perception of Mandarin first and second (high-level and mid-rising) tones was investigated with preceding speech and non-speech contexts. Results indicate that the mean fundamental frequency (f0) of a preceding sentence affects perception of contour lexical tones and the effect is contrastive. Following a sentence with a higher-frequency mean f0, the following syllable is more likely to be perceived as a lower frequency lexical tone and vice versa. Moreover, non-speech precursors modeling the mean spectrum of f0 also elicit this effect, suggesting general perceptual processing rather than articulatory-based or speaker-identity-driven mechanisms.
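
    A toy sketch of the contrastive effect reported here: classifying the same target f0 against different context means. The threshold and the reduction of tone identity to f0 height are simplifying assumptions for illustration only, not the study's stimuli or analysis.

    ```python
    # Toy illustration of contrastive context normalization in lexical tone
    # perception: the same target f0 is classified relative to the mean f0
    # of the preceding sentence, so a higher-f0 context pushes perception
    # toward the lower tone category. Values/threshold are illustrative.
    def classify_tone(target_f0, context_mean_f0, threshold=1.05):
        """Label the target high-level (Tone 1) if its f0 sufficiently
        exceeds the context mean; otherwise mid-rising (Tone 2)."""
        return "Tone 1" if target_f0 / context_mean_f0 > threshold else "Tone 2"

    print(classify_tone(210.0, 180.0))  # low-f0 context -> heard as Tone 1
    print(classify_tone(210.0, 220.0))  # high-f0 context -> heard as Tone 2
    ```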

  14. The Role of Early Language Experience in the Development of Speech Perception and Phonological Processing Abilities: Evidence from 5-Year-Olds with Histories of Otitis Media with Effusion and Low Socioeconomic Status

    ERIC Educational Resources Information Center

    Nittrouer, Susan; Burton, Lisa Thuente

    2005-01-01

    This study tested the hypothesis that early language experience facilitates the development of language-specific perceptual weighting strategies believed to be critical for accessing phonetic structure. In turn, that structure allows for efficient storage and retrieval of words in verbal working memory, which is necessary for sentence…

  15. Parallel versus Serial Processing Dependencies in the Perisylvian Speech Network: A Granger Analysis of Intracranial EEG Data

    ERIC Educational Resources Information Center

    Gow, David W., Jr.; Keller, Corey J.; Eskandar, Emad; Meng, Nate; Cash, Sydney S.

    2009-01-01

    In this work, we apply Granger causality analysis to high spatiotemporal resolution intracranial EEG (iEEG) data to examine how different components of the left perisylvian language network interact during spoken language perception. The specific focus is on the characterization of serial versus parallel processing dependencies in the dominant…
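
    The Granger-causality analysis named here can be sketched with statsmodels; the synthetic data below (y driven by lagged x) stand in for the iEEG recordings, and the lag order is an arbitrary choice, not the study's.

    ```python
    # Minimal sketch of a Granger-causality test between two time series.
    # Data are synthetic: y is driven by a lagged copy of x, so x should
    # Granger-cause y but not the reverse.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(1)
    n = 500
    x = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(2, n):
        y[t] = 0.6 * x[t - 2] + 0.2 * rng.standard_normal()

    # Column order is [effect, cause]: does x help predict y?
    data = np.column_stack([y, x])
    grangercausalitytests(data, maxlag=3)
    ```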

  16. Word Boundaries in L2 Speech: Evidence from Polish Learners of English

    ERIC Educational Resources Information Center

    Schwartz, Geoffrey

    2016-01-01

    Acoustic and perceptual studies investigate B2-level Polish learners' acquisition of second language (L2) English word boundaries involving word-initial vowels. In production, participants were less likely to produce glottalization of phrase-medial initial vowels in L2 English than in first language (L1) Polish. Perception studies employing word…

  17. Methodological Adaptations for Investigating the Perceptions of Language-Impaired Adolescents Regarding the Relative Importance of Selected Communication Skills

    ERIC Educational Resources Information Center

    Reed, Vicki A.; Brammall, Helen

    2006-01-01

    This article describes the systematic and detailed processes undertaken to modify a research methodology for use with language-impaired adolescents. The original methodology had been used previously with normally achieving adolescents and speech pathologists to obtain their opinions about the relative importance of selected communication skills…

  18. Age of Acquisition and Proficiency in a Second Language Independently Influence the Perception of Non-Native Speech

    ERIC Educational Resources Information Center

    Archila-Suerte, Pilar; Zevin, Jason; Bunta, Ferenc; Hernandez, Arturo E.

    2012-01-01

    Sensorimotor processing in children and higher-cognitive processing in adults could determine how non-native phonemes are acquired. This study investigates how age-of-acquisition (AOA) and proficiency-level (PL) predict native-like perception of statistically dissociated L2 categories, i.e., within-category and between-category. In a similarity…

  19. The Interplay between Input and Initial Biases: Asymmetries in Vowel Perception during the First Year of Life

    ERIC Educational Resources Information Center

    Pons, Ferran; Albareda-Castellot, Barbara; Sebastian-Galles, Nuria

    2012-01-01

    Vowels with extreme articulatory-acoustic properties act as natural referents. Infant perceptual asymmetries point to an underlying bias favoring these referent vowels. However, as language experience is gathered, distributional frequency of speech sounds could modify this initial bias. The perception of the /i/-/e/ contrast was explored in 144…

  20. Learning to Recognize Speakers of a Non-Native Language: Implications for the Functional Organization of Human Auditory Cortex

    ERIC Educational Resources Information Center

    Perrachione, Tyler K.; Wong, Patrick C. M.

    2007-01-01

    Brain imaging studies of voice perception often contrast activation from vocal and verbal tasks to identify regions uniquely involved in processing voice. However, such a strategy precludes detection of the functional relationship between speech and voice perception. In a pair of experiments involving identifying voices from native and foreign…

  1. Early and Late Spanish-English Bilingual Adults' Perception of American English Vowels

    ERIC Educational Resources Information Center

    Baigorri, Miriam

    2016-01-01

    Increasing numbers of Hispanic immigrants are entering the US (US Census Bureau, 2011) and are learning American English (AE) as a second language (L2). Many may experience difficulty in understanding AE. Accurate perception of AE vowels is important because vowels carry a large part of the speech signal (Kewley-Port, Burkle, & Lee, 2007). The…

  2. The early maximum likelihood estimation model of audiovisual integration in speech perception.

    PubMed

    Andersen, Tobias S

    2015-05-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP.
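
    The inverse-variance weighting at the heart of MLE integration models can be written out directly. The sketch below shows the textbook fusion rule, not the paper's full early-MLE model, which adds categorization after integration on the continuous internal representation.

    ```python
    # Sketch of the standard maximum-likelihood (inverse-variance) fusion
    # rule underlying MLE models of cue integration: each modality's
    # estimate is weighted by its reliability.
    def mle_fuse(est_a, var_a, est_v, var_v):
        """Fuse auditory and visual estimates of the same internal variable."""
        w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
        fused = w_a * est_a + (1 - w_a) * est_v
        fused_var = 1 / (1 / var_a + 1 / var_v)  # never worse than either cue
        return fused, fused_var

    # A reliable visual cue pulls the percept toward the visual estimate:
    print(mle_fuse(est_a=0.2, var_a=1.0, est_v=0.8, var_v=0.25))  # (0.68, 0.2)
    ```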

  3. Cross-language differences in the brain network subserving intelligible speech.

    PubMed

    Ge, Jianqiao; Peng, Gang; Lyu, Bingjiang; Wang, Yi; Zhuo, Yan; Niu, Zhendong; Tan, Li Hai; Leff, Alexander P; Gao, Jia-Hong

    2015-03-10

    How is language processed in the brain by native speakers of different languages? Is there one brain system for all languages or are different languages subserved by different brain systems? The first view emphasizes commonality, whereas the second emphasizes specificity. We investigated the cortical dynamics involved in processing two very diverse languages: a tonal language (Chinese) and a nontonal language (English). We used functional MRI and dynamic causal modeling analysis to compute and compare brain network models exhaustively with all possible connections among nodes of language regions in temporal and frontal cortex and found that the information flow from the posterior to anterior portions of the temporal cortex was commonly shared by Chinese and English speakers during speech comprehension, whereas the inferior frontal gyrus received neural signals from the left posterior portion of the temporal cortex in English speakers and from the bilateral anterior portion of the temporal cortex in Chinese speakers. Our results revealed that, although speech processing is largely carried out in the common left hemisphere classical language areas (Broca's and Wernicke's areas) and anterior temporal cortex, speech comprehension across different language groups depends on how these brain regions interact with each other. Moreover, the right anterior temporal cortex, which is crucial for tone processing, is equally important as its left homolog, the left anterior temporal cortex, in modulating the cortical dynamics in tone language comprehension. The current study pinpoints the importance of the bilateral anterior temporal cortex in language comprehension that is downplayed or even ignored by popular contemporary models of speech comprehension.

  4. Cross-language differences in the brain network subserving intelligible speech

    PubMed Central

    Ge, Jianqiao; Peng, Gang; Lyu, Bingjiang; Wang, Yi; Zhuo, Yan; Niu, Zhendong; Tan, Li Hai; Leff, Alexander P.; Gao, Jia-Hong

    2015-01-01

    How is language processed in the brain by native speakers of different languages? Is there one brain system for all languages or are different languages subserved by different brain systems? The first view emphasizes commonality, whereas the second emphasizes specificity. We investigated the cortical dynamics involved in processing two very diverse languages: a tonal language (Chinese) and a nontonal language (English). We used functional MRI and dynamic causal modeling analysis to compute and compare brain network models exhaustively with all possible connections among nodes of language regions in temporal and frontal cortex and found that the information flow from the posterior to anterior portions of the temporal cortex was commonly shared by Chinese and English speakers during speech comprehension, whereas the inferior frontal gyrus received neural signals from the left posterior portion of the temporal cortex in English speakers and from the bilateral anterior portion of the temporal cortex in Chinese speakers. Our results revealed that, although speech processing is largely carried out in the common left hemisphere classical language areas (Broca’s and Wernicke’s areas) and anterior temporal cortex, speech comprehension across different language groups depends on how these brain regions interact with each other. Moreover, the right anterior temporal cortex, which is crucial for tone processing, is equally important as its left homolog, the left anterior temporal cortex, in modulating the cortical dynamics in tone language comprehension. The current study pinpoints the importance of the bilateral anterior temporal cortex in language comprehension that is downplayed or even ignored by popular contemporary models of speech comprehension. PMID:25713366

  5. Sociological effects on vocal aging: Age-related F0 effects in two languages

    NASA Astrophysics Data System (ADS)

    Nagao, Kyoko

    2005-04-01

    Listeners can estimate the age of a speaker fairly accurately from their speech (Ptacek and Sander, 1966). It is generally considered that this perception is based on physiologically determined aspects of speech. However, the degree to which it is due to conventional sociolinguistic aspects of speech is unknown. The current study examines the degree to which fundamental frequency (F0) changes with advancing age across two language groups of speakers. It also examines the degree to which speakers associate these changes with aging in a voice disguising task. Thirty native speakers each of English and Japanese, taken from three age groups, read a target phrase embedded in a carrier sentence in their native language. Each speaker also read the sentence pretending to be 20 years younger or 20 years older than their actual age. Preliminary analysis of eighteen Japanese speakers indicates that the mean and maximum F0 values were higher when the speakers pretended to be younger than when they pretended to be older. Some previous studies on age perception, however, suggested that F0 has minor effects on listeners' age estimation. The acoustic results will also be discussed in conjunction with the results of the listeners' age estimation of the speakers.

  6. Listening Effort During Sentence Processing Is Increased for Non-native Listeners: A Pupillometry Study

    PubMed Central

    Borghini, Giulia; Hazan, Valerie

    2018-01-01

    Current evidence demonstrates that even though some non-native listeners can achieve native-like performance for speech perception tasks in quiet, the presence of background noise is much more detrimental to speech intelligibility for non-native compared to native listeners. Even when performance is equated across groups, it is likely that greater listening effort is required for non-native listeners. Importantly, the added listening effort might result in increased fatigue and a reduced ability to successfully perform multiple tasks simultaneously. Task-evoked pupil responses have been demonstrated to be a reliable measure of cognitive effort and can be useful in clarifying these aspects. In this study we compared the pupil response for 23 native English speakers and 27 Italian speakers of English as a second language. Speech intelligibility was tested for sentences presented in quiet and in background noise at two performance levels that were matched across groups. Signal-to-noise levels corresponding to these sentence intelligibility levels were pre-determined using an adaptive intelligibility task. Pupil response was significantly greater in non-native compared to native participants across both intelligibility levels. Therefore, for a given intelligibility level, greater listening effort is required when listening in a second language in order to understand speech in noise. Results also confirmed that pupil response is sensitive to speech intelligibility during language comprehension, in line with previous research. However, contrary to our predictions, pupil response was not differentially modulated by intelligibility levels for native and non-native listeners. The present study corroborates that pupillometry can be deemed a valid measure for speech perception research, because it is sensitive to differences both across participants, such as listener type, and across conditions, such as variations in the level of speech intelligibility. Importantly, pupillometry makes it possible to uncover differences in listening effort even when these do not emerge in individuals' performance levels. PMID:29593489

  7. Referred speech-language and hearing complaints in the western region of São Paulo, Brazil

    PubMed Central

    Samelli, Alessandra Giannella; Rondon, Silmara; Oliver, Fátima Correa; Junqueira, Simone Rennó; Molini-Avejonas, Daniela Regina

    2014-01-01

    OBJECTIVE: The aim of this study was to characterize the epidemiological profile of the population attending primary health care units in the western region of the city of São Paulo, Brazil, highlighting referred speech-language and hearing complaints. METHOD: This investigation was a cross-sectional observational study conducted in primary health care units. Household surveys were conducted and information was obtained from approximately 2602 individuals, including (but not limited to) data related to education, family income, health issues, access to public services and access to health services. The speech-language and hearing complaints were identified from specific questions. RESULTS: Our results revealed that the populations participating in the survey were heterogeneous in terms of their demographic and economic characteristics. The prevalence of referred speech-language and hearing complaints in this population was 10%, and only half the users of the public health system in the studied region who had complaints were monitored or received specific treatment. CONCLUSIONS: The results demonstrate the importance of using population surveys to identify speech-language and hearing complaints at the level of primary health care. Moreover, these findings highlight the need to reorganize the speech-language pathology and audiology service in the western region of São Paulo, as well as the need to improve the Family Health Strategy in areas that do not have a complete coverage, in order to expand and improve the territorial diagnostics and the speech-language pathology and audiology actions related to the prevention, identification, and rehabilitation of human communication disorders. PMID:24964306

  8. Audiophonological results after cochlear implantation in 40 congenitally deaf patients: preliminary results.

    PubMed

    Loundon, N; Busquet, D; Roger, G; Moatti, L; Garabedian, E N

    2000-11-30

    The aim of this study was to evaluate prognostic factors for audiophonological outcomes of cochlear implantation in congenitally deaf patients. Between 1991 and 1996, 40 congenitally deaf children underwent cochlear implantation in our department, at an average age of 7 years (median: 5 years). The results of speech therapy were evaluated with a mean follow-up of 2 years and were classified according to four criteria: perception of sound, speech perception, speech production and the level of oral language. For each criterion, a score was established ranging from zero to four. These scores were weighted according to age such that the results before and after implantation only reflected the changes related to the implantation. The prognostic factors for good results were: a good level of oral communication before implantation, residual hearing, progressive deafness and implantation at a young age. On the other hand, poor prognostic factors were: the presence of behavioral disorders and poor communication skills prior to implantation. Overall, the major prognostic factor for a good outcome appeared to be the preoperative level of oral language, even if this was rudimentary.

  9. The influence of otitis media with effusion on speech and language development and psycho-intellectual behaviour of the preschool child--results of a cross-sectional study in 1,512 children.

    PubMed

    Van Cauwenberge, P; Van Cauwenberge, K; Kluyskens, P

    1985-01-01

    To investigate the influence of otitis media with effusion (OME) on the psychological, social and intellectual development of preschool children, a cross-sectional study of 1,512 apparently healthy children, aged 25-80 months, attending kindergarten (infant school) was performed. Tympanometry and evaluation of the various psychological, social and intellectual parameters by the infant school teacher (assisted by a sociologist) were the most important diagnostic tools in this study. It was demonstrated that OME had a negative influence on speech/language development, intelligence, attention at school, activity at school, manual skill and social behaviour of the 2- to 6-year-old child. For speech and language, the negative influence was most clearly demonstrated in the youngest age group (less than 47 months); for intelligence and activity, in the older age groups. Early detection and appropriate treatment of OME are recommended to avoid these complications.

  10. Hand and mouth: Cortical correlates of lexical processing in British Sign Language and speechreading English

    PubMed Central

    Capek, Cheryl M.; Waters, Dafydd; Woll, Bencie; MacSweeney, Mairéad; Brammer, Michael J.; McGuire, Philip K.; David, Anthony S.; Campbell, Ruth

    2012-01-01

    Spoken languages use one set of articulators – the vocal tract, whereas signed languages use multiple articulators, including both manual and facial actions. How sensitive are the cortical circuits for language processing to the particular articulators that are observed? This question can only be addressed with participants who use both speech and a signed language. In this study, we used fMRI to compare the processing of speechreading and sign processing in deaf native signers of British Sign Language (BSL) who were also proficient speechreaders. The following questions were addressed: To what extent do these different language types rely on a common brain network? To what extent do the patterns of activation differ? How are these networks affected by the articulators that languages use? Common perisylvian regions were activated both for speechreading English words and for BSL signs. Distinctive activation was also observed reflecting the language form. Speechreading elicited greater activation in the left mid-superior temporal cortex than BSL, whereas BSL processing generated greater activation at the parieto-occipito-temporal junction in both hemispheres. We probed this distinction further within BSL, where manual signs can be accompanied by different sorts of mouth action. BSL signs with speech-like mouth actions showed greater superior temporal activation, while signs made with non-speech-like mouth actions showed more activation in posterior and inferior temporal regions. Distinct regions within the temporal cortex are not only differentially sensitive to perception of the distinctive articulators for speech and for sign, but also show sensitivity to the different articulators within the (signed) language. PMID:18284353

  11. The coarticulation/invariance scale: Mutual information as a measure of coarticulation resistance, motor synergy, and articulatory invariance

    PubMed Central

    Iskarous, Khalil; Mooshammer, Christine; Hoole, Phil; Recasens, Daniel; Shadle, Christine H.; Saltzman, Elliot; Whalen, D. H.

    2013-01-01

    Coarticulation and invariance are two topics at the center of theorizing about speech production and speech perception. In this paper, a quantitative scale is proposed that places coarticulation and invariance at the two ends of the scale. This scale is based on physical information flow in the articulatory signal, and uses Information Theory, especially the concept of mutual information, to quantify these central concepts of speech research. Mutual Information measures the amount of physical information shared across phonological units. In the proposed quantitative scale, coarticulation corresponds to greater and invariance to lesser information sharing. The measurement scale is tested by data from three languages: German, Catalan, and English. The relation between the proposed scale and several existing theories of coarticulation is discussed, and implications for existing theories of speech production and perception are presented. PMID:23927125
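
    The scale rests on the standard definition of mutual information, I(X;Y) = Σ p(x,y) log2[p(x,y) / (p(x)p(y))]. The sketch below computes that quantity from a contingency table of discretized articulatory measurements; the counts and bin labels are hypothetical, not the study's data.

    ```python
    import numpy as np

    def mutual_information_bits(joint_counts):
        """Mutual information (in bits) from a 2-D table of co-occurrence counts."""
        p_xy = joint_counts / joint_counts.sum()   # joint distribution
        p_x = p_xy.sum(axis=1, keepdims=True)      # row marginal
        p_y = p_xy.sum(axis=0, keepdims=True)      # column marginal
        nz = p_xy > 0                              # skip empty cells (0 * log 0 = 0)
        return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x * p_y)[nz])).sum())

    # Hypothetical counts: tongue-position bins during a vowel (rows) crossed with
    # the same bins during a following consonant (columns). Higher MI means more
    # information sharing, i.e., stronger coarticulation on the proposed scale.
    counts = np.array([[30.0, 5.0], [4.0, 25.0]])
    print(mutual_information_bits(counts))
    ```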

  12. Syllable Structure Universals and Native Language Interference in Second Language Perception and Production: Positional Asymmetry and Perceptual Links to Accentedness

    PubMed Central

    Cheng, Bing; Zhang, Yang

    2015-01-01

    The present study investigated how syllable structure differences between the first language (L1) and the second language (L2) affect L2 consonant perception and production at syllable-initial and syllable-final positions. The participants were Mandarin-speaking college students who studied English as a second language. Monosyllabic English words were used in the perception test. Production was recorded from each Chinese subject and rated for accentedness by two native speakers of English. Consistent with previous studies, significant positional asymmetry effects were found across speech sound categories in terms of voicing, place of articulation, and manner of articulation. Furthermore, significant correlations between perception and accentedness ratings were found at the syllable onset position but not for the coda. Many exceptions were also found, which could not be solely accounted for by differences in L1–L2 syllabic structures. The results show a strong effect of language experience at the syllable level, which joins forces with acoustic, phonetic, and phonemic properties of individual consonants in influencing positional asymmetry in both domains of L2 segmental perception and production. The complexities and exceptions call for further systematic studies on the interactions between syllable structure universals and native language interference, with refined theoretical models to specify the links between perception and production in second language acquisition. PMID:26635699

  13. Interdependence of linguistic and indexical speech perception skills in school-age children with early cochlear implantation.

    PubMed

    Geers, Ann E; Davidson, Lisa S; Uchanski, Rosalie M; Nicholas, Johanna G

    2013-09-01

    This study documented the ability of experienced pediatric cochlear implant (CI) users to perceive linguistic properties (what is said) and indexical attributes (emotional intent and talker identity) of speech, and examined the extent to which linguistic (LSP) and indexical (ISP) perception skills are related. Preimplant-aided hearing, age at implantation, speech processor technology, CI-aided thresholds, sequential bilateral cochlear implantation, and academic integration with hearing age-mates were examined for their possible relationships to both LSP and ISP skills. Sixty 9- to 12-year olds, first implanted at an early age (12 to 38 months), participated in a comprehensive test battery that included the following LSP skills: (1) recognition of monosyllabic words at loud and soft levels, (2) repetition of phonemes and suprasegmental features from nonwords, and (3) recognition of key words from sentences presented within a noise background, and the following ISP skills: (1) discrimination of across-gender and within-gender (female) talkers and (2) identification and discrimination of emotional content from spoken sentences. A group of 30 age-matched children without hearing loss completed the nonword repetition, and talker- and emotion-perception tasks for comparison. Word-recognition scores decreased with signal level from a mean of 77% correct at 70 dB SPL to 52% at 50 dB SPL. On average, CI users recognized 50% of key words presented in sentences that were 9.8 dB above background noise. Phonetic properties were repeated from nonword stimuli at about the same level of accuracy as suprasegmental attributes (70 and 75%, respectively). The majority of CI users identified emotional content and differentiated talkers significantly above chance levels. Scores on LSP and ISP measures were combined into separate principal component scores and these components were highly correlated (r = 0.76). Both LSP and ISP component scores were higher for children who received a CI at the youngest ages, upgraded to more recent CI technology and had lower CI-aided thresholds. Higher scores, for both LSP and ISP components, were also associated with higher language levels and mainstreaming at younger ages. Higher ISP scores were associated with better social skills. Results strongly support a link between indexical and linguistic properties in perceptual analysis of speech. These two channels of information appear to be processed together in parallel by the auditory system and are inseparable in perception. Better speech performance, for both linguistic and indexical perception, is associated with younger age at implantation and use of more recent speech processor technology. Children with better speech perception demonstrated better spoken language, earlier academic mainstreaming, and placement in more typically sized classrooms (i.e., >20 students). Well-developed social skills were more highly associated with the ability to discriminate the nuances of talker identity and emotion than with the ability to recognize words and sentences through listening. The extent to which early cochlear implantation enabled these early-implanted children to make use of both linguistic and indexical properties of speech influenced not only their development of spoken language, but also their ability to function successfully in a hearing world.
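
    A generic sketch of the component-score analysis described above: each battery is z-scored, reduced to its first principal component, and the two component scores are correlated. The data below are random placeholders, so the printed r will not reproduce the reported 0.76.

    ```python
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 60                            # sixty implanted children in the study
    lsp = rng.normal(size=(n, 3))     # placeholder linguistic (LSP) measures
    isp = rng.normal(size=(n, 2))     # placeholder indexical (ISP) measures

    def first_component(measures):
        """First principal component of the z-scored measures."""
        z = StandardScaler().fit_transform(measures)
        return PCA(n_components=1).fit_transform(z).ravel()

    r, p = pearsonr(first_component(lsp), first_component(isp))
    print(f"r = {r:.2f} (p = {p:.3f})")
    ```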

  14. Interdependence of Linguistic and Indexical Speech Perception Skills in School-Aged Children with Early Cochlear Implantation

    PubMed Central

    Geers, Ann; Davidson, Lisa; Uchanski, Rosalie; Nicholas, Johanna

    2013-01-01

    Objectives This study documented the ability of experienced pediatric cochlear implant (CI) users to perceive linguistic properties (what is said) and indexical attributes (emotional intent and talker identity) of speech, and examined the extent to which linguistic (LSP) and indexical (ISP) perception skills are related. Pre-implant aided hearing, age at implantation, speech processor technology, CI-aided thresholds, sequential bilateral cochlear implantation, and academic integration with hearing age-mates were examined for their possible relationships to both LSP and ISP skills. Design Sixty 9–12 year olds, first implanted at an early age (12–38 months), participated in a comprehensive test battery that included the following LSP skills: 1) recognition of monosyllabic words at loud and soft levels, 2) repetition of phonemes and suprasegmental features from non-words, and 3) recognition of keywords from sentences presented within a noise background, and the following ISP skills: 1) discrimination of male from female and female from female talkers and 2) identification and discrimination of emotional content from spoken sentences. A group of 30 age-matched children without hearing loss completed the non-word repetition, and talker- and emotion-perception tasks for comparison. Results Word recognition scores decreased with signal level from a mean of 77% correct at 70 dB SPL to 52% at 50 dB SPL. On average, CI users recognized 50% of keywords presented in sentences that were 9.8 dB above background noise. Phonetic properties were repeated from non-word stimuli at about the same level of accuracy as suprasegmental attributes (70% and 75%, respectively). The majority of CI users identified emotional content and differentiated talkers significantly above chance levels. Scores on LSP and ISP measures were combined into separate principal component scores and these components were highly correlated (r = .76). Both LSP and ISP component scores were higher for children who received a CI at the youngest ages, upgraded to more recent CI technology and had lower CI-aided thresholds. Higher scores, for both LSP and ISP components, were also associated with higher language levels and mainstreaming at younger ages. Higher ISP scores were associated with better social skills. Conclusions Results strongly support a link between indexical and linguistic properties in perceptual analysis of speech. These two channels of information appear to be processed together in parallel by the auditory system and are inseparable in perception. Better speech performance, for both linguistic and indexical perception, is associated with younger age at implantation and use of more recent speech processor technology. Children with better speech perception demonstrated better spoken language, earlier academic mainstreaming, and placement in more typically-sized classrooms (i.e., >20 students). Well-developed social skills were more highly associated with the ability to discriminate the nuances of talker identity and emotion than with the ability to recognize words and sentences through listening. The extent to which early cochlear implantation enabled these early-implanted children to make use of both linguistic and indexical properties of speech influenced not only their development of spoken language, but also their ability to function successfully in a hearing world. PMID:23652814

  15. Development of speech perception and production in children with cochlear implants.

    PubMed

    Kishon-Rabin, Liat; Taitelbaum, Riki; Muchnik, Chava; Gehtler, Inbal; Kronenberg, Jona; Hildesheimer, Minka

    2002-05-01

    The purpose of the present study was twofold: 1) to compare the hierarchy of perceived and produced significant speech pattern contrasts in children with cochlear implants, and 2) to compare this hierarchy to developmental data of children with normal hearing. The subjects included 35 prelingual hearing-impaired children with multichannel cochlear implants. The test materials were the Hebrew Speech Pattern Contrast (HeSPAC) test and the Hebrew Picture Speech Pattern Contrast (HePiSPAC) test for older and younger children, respectively. The results show that 1) auditory speech perception performance of children with cochlear implants reaches an asymptote at 76% (after correction for guessing) between 4 and 6 years of implant use; 2) all implant users perceived vowel place extremely well immediately after implantation; 3) most implanted children perceived initial voicing at chance level until 2 to 3 years after implantation, after which scores improved by 60% to 70% with implant use; 4) the hierarchy of phonetic-feature production paralleled that of perception: vowels first, voicing last, and manner and place of articulation in between; and 5) the hierarchy in speech pattern contrast perception and production was similar between the implanted and the normal-hearing children, with the exception of the vowels (possibly because of the interaction between the specific information provided by the implant device and the acoustics of the Hebrew language). The data reported here contribute to our current knowledge about the development of phonological contrasts in children who were deprived of sound in the first few years of their lives and then developed phonetic representations via cochlear implants. The data also provide additional insight into the interrelated skills of speech perception and production.
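
    The "correction for guessing" presumably follows the standard forced-choice rescaling, under which chance performance maps to 0 and perfect performance to 1; the exact correction used in the HeSPAC materials is an assumption here.

    ```python
    def correct_for_guessing(p_observed, p_chance):
        """Rescale proportion correct so that chance -> 0 and perfect -> 1."""
        return (p_observed - p_chance) / (1.0 - p_chance)

    # E.g., 88% observed on a two-alternative contrast (chance = 50%):
    print(correct_for_guessing(0.88, 0.50))  # 0.76, the reported asymptote
    ```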

  16. Musical training during early childhood enhances the neural encoding of speech in noise

    PubMed Central

    Strait, Dana L.; Parbery-Clark, Alexandra; Hittner, Emily; Kraus, Nina

    2012-01-01

    For children, learning often occurs in the presence of background noise. As such, there is growing desire to improve a child’s access to a target signal in noise. Given adult musicians’ perceptual and neural speech-in-noise enhancements, we asked whether similar effects are present in musically-trained children. We assessed the perception and subcortical processing of speech in noise and related cognitive abilities in musician and nonmusician children that were matched for a variety of overarching factors. Outcomes reveal that musicians’ advantages for processing speech in noise are present during pivotal developmental years. Supported by correlations between auditory working memory and attention and auditory brainstem response properties, we propose that musicians’ perceptual and neural enhancements are driven in a top-down manner by strengthened cognitive abilities with training. Our results may be considered by professionals involved in the remediation of language-based learning deficits, which are often characterized by poor speech perception in noise. PMID:23102977

  17. fMRI as a Preimplant Objective Tool to Predict Postimplant Oral Language Outcomes in Children with Cochlear Implants.

    PubMed

    Deshpande, Aniruddha K; Tan, Lirong; Lu, Long J; Altaye, Mekibib; Holland, Scott K

    2016-01-01

    Despite the positive effects of cochlear implantation, postimplant variability in speech perception and oral language outcomes is still difficult to predict. The aim of this study was to identify neuroimaging biomarkers of postimplant speech perception and oral language performance in children with hearing loss who receive a cochlear implant. The authors hypothesized positive correlations between blood oxygen level-dependent functional magnetic resonance imaging (fMRI) activation in brain regions related to auditory language processing and attention and scores on the Clinical Evaluation of Language Fundamentals-Preschool, Second Edition (CELF-P2) and the Early Speech Perception Test for Profoundly Hearing-Impaired Children (ESP), in children with congenital hearing loss. Eleven children with congenital hearing loss were recruited for the present study based on referral for clinical MRI and other inclusion criteria. All participants were <24 months at fMRI scanning and <36 months at first implantation. A silent background fMRI acquisition method was performed to acquire fMRI during auditory stimulation. A voxel-based analysis technique was utilized to generate z maps showing significant contrast in brain activation between auditory stimulation conditions (spoken narratives and narrow band noise). CELF-P2 and ESP were administered 2 years after implantation. Because most participants reached a ceiling on ESP, a voxel-wise regression analysis was performed between preimplant fMRI activation and postimplant CELF-P2 scores alone. Age at implantation and preimplant hearing thresholds were controlled in this regression analysis. Four brain regions were found to be significantly correlated with CELF-P2 scores. These clusters of positive correlation encompassed the temporo-parieto-occipital junction, areas in the prefrontal cortex and the cingulate gyrus. For the story versus silence contrast, CELF-P2 core language score demonstrated significant positive correlation with activation in the right angular gyrus (r = 0.95), left medial frontal gyrus (r = 0.94), and left cingulate gyrus (r = 0.96). For the narrow band noise versus silence contrast, the CELF-P2 core language score exhibited significant positive correlation with activation in the left angular gyrus (r = 0.89; for all clusters, corrected p < 0.05). Four brain regions related to language function and attention were identified that correlated with CELF-P2. Children with better oral language performance postimplant displayed greater activation in these regions preimplant. The results suggest that despite auditory deprivation, these regions are more receptive to gains in oral language development performance of children with hearing loss who receive early intervention via cochlear implantation. The present study suggests that oral language outcome following cochlear implant may be predicted by preimplant fMRI with auditory stimulation using natural speech.

  18. Children’s Recall of Words Spoken in Their First and Second Language: Effects of Signal-to-Noise Ratio and Reverberation Time

    PubMed Central

    Hurtig, Anders; Keus van de Poll, Marijke; Pekkola, Elina P.; Hygge, Staffan; Ljung, Robert; Sörqvist, Patrik

    2016-01-01

    Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants’ first- (L1) and second-language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 s) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 in comparison with recall of spoken words presented in L1. Words that were presented with a high SNR (+12 dBA) improved recall compared to a low SNR (+3 dBA). Reverberation time interacted with SNR to the effect that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language. PMID:26834665

  19. Bayesian model of categorical effects in L1 and L2 speech perception

    NASA Astrophysics Data System (ADS)

    Kronrod, Yakov

    In this dissertation I present a model that captures categorical effects in both first language (L1) and second language (L2) speech perception. In L1 perception, categorical effects range from extremely strong for consonants to nearly continuous perception for vowels. I treat the problem of speech perception as a statistical inference problem, and by quantifying categoricity I obtain a unified model of both strong and weak categorical effects. In this optimal inference mechanism, the listener uses their knowledge of categories and the acoustics of the signal to infer the intended productions of the speaker. The model splits speech variability into meaningful category variance and perceptual noise variance. The ratio of these two variances, which I call Tau, directly correlates with the degree of categorical effects for a given phoneme or continuum. By fitting the model to behavioral data from different phonemes, I show how a single parametric quantitative variation can lead to the different degrees of categorical effects seen in perception experiments with different phonemes. In L2 perception, L1 categories have been shown to exert an effect on how L2 sounds are identified and how well the listener is able to discriminate them. Various models have been developed to relate the state of L1 categories with both the initial and eventual ability to process the L2. These models have largely lacked a formalized metric to measure perceptual distance, a means of making a priori predictions of behavior for a new contrast, and a way of describing non-discrete gradient effects. In the second part of my dissertation, I apply the same computational model that I used to unify L1 categorical effects to examining L2 perception. I show that we can use the model to make the same type of predictions as other SLA models, but also provide a quantitative framework while formalizing all measures of similarity and bias. Further, I show how, by using this model to consider L2 learners at different stages of development, we can track specific parameters of categories as they change over time, giving us a look into the actual process of L2 category development.
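
    For a single Gaussian category this kind of inference has a closed form: with category distribution N(mu_c, var_c) and perceptual noise variance var_s, the optimal estimate of the intended target given signal S is E[T|S] = (var_c * S + var_s * mu_c) / (var_c + var_s), so the variance ratio governs how strongly percepts are drawn toward the category mean. A minimal sketch under those assumptions (one category; parameter values invented):

    ```python
    import numpy as np

    def perceived(signal, mu_c, var_category, var_noise):
        """Posterior mean of the intended target under one Gaussian category:
        a compromise between the raw signal and the category mean."""
        return (var_category * signal + var_noise * mu_c) / (var_category + var_noise)

    stimuli = np.linspace(-2.0, 2.0, 5)        # a continuum with the category mean at 0
    print(perceived(stimuli, 0.0, 1.0, 0.25))  # little noise: near-continuous (vowel-like)
    print(perceived(stimuli, 0.0, 0.25, 1.0))  # much noise: strong pull (consonant-like)
    ```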

  20. Native language shapes automatic neural processing of speech.

    PubMed

    Intartaglia, Bastien; White-Schwoch, Travis; Meunier, Christine; Roman, Stéphane; Kraus, Nina; Schön, Daniele

    2016-08-01

    The development of the phoneme inventory is driven by the acoustic-phonetic properties of one's native language. Neural representation of speech is known to be shaped by language experience, as indexed by cortical responses, and recent studies suggest that subcortical processing also exhibits this attunement to native language. However, most work to date has focused on the differences between tonal and non-tonal languages that use pitch variations to convey phonemic categories. The aim of this cross-language study is to determine whether subcortical encoding of speech sounds is sensitive to language experience by comparing native speakers of two non-tonal languages (French and English). We hypothesized that neural representations would be more robust and fine-grained for speech sounds that belong to the native phonemic inventory of the listener, and especially for the dimensions that are phonetically relevant to the listener such as high frequency components. We recorded neural responses of American English and French native speakers, listening to natural syllables of both languages. Results showed that, independently of the stimulus, American participants exhibited greater neural representation of the fundamental frequency compared to French participants, consistent with the importance of the fundamental frequency to convey stress patterns in English. Furthermore, participants showed more robust encoding and more precise spectral representations of the first formant when listening to the syllable of their native language as compared to non-native language. These results align with the hypothesis that language experience shapes sensory processing of speech and that this plasticity occurs as a function of what is meaningful to a listener.

  1. Preschool teachers' perception and use of hearing assistive technology in educational settings.

    PubMed

    Nelson, Lauri H; Poole, Bridget; Muñoz, Karen

    2013-07-01

    This study explored how often sound-field amplification and personal frequency-modulated (FM) systems are used in preschool classrooms, teacher perceptions of advantages and disadvantages of using hearing assistive technology, and teacher recommendations for hearing assistive technology use. The study used a cross-sectional survey design. Participants were professionals who provided services to preschool-age children who are deaf or hard of hearing in public or private schools. A total of 306 surveys were sent to 162 deaf education programs throughout the United States; 99 surveys were returned (32%). Simple statistics were used to describe the quantitative survey results; content analysis was completed on open-ended survey comments. Surveys were received from teachers working at listening and spoken language preschool programs (65%) and at bilingual-bicultural and total communication preschool programs (35%). Most respondents perceived that hearing assistive technology improved students' academic performance, speech and language development, behavior, and attention in the classroom. The majority of respondents also reported that they definitely would or probably would recommend a sound-field system (77%) or personal FM system (71%) to other educators. Hearing assistive technology is frequently used in preschool classrooms of children who are deaf or hard of hearing, with generally positive teacher perceptions of the benefits of using such technology.

  2. Visual activity predicts auditory recovery from deafness after adult cochlear implantation.

    PubMed

    Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2013-12-01

    Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery occurs during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. A further positive correlation with auditory speech recovery was localized in the left inferior frontal area, a region known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.

  3. [The Russian-language version of the matrix test (RUMatrix) in free field in patients after cochlear implantation in the long term].

    PubMed

    Goykhburg, M V; Bakhshinyan, V V; Petrova, I P; Wazybok, A; Kollmeier, B; Tavartkiladze, G A

    The deterioration of speech intelligibility in patients using cochlear implantation (CI) systems is especially apparent in noisy environments, which explains why phrasal speech tests, such as the Matrix sentence test, have become increasingly popular in speech audiometry during rehabilitation after CI. The Matrix test makes it possible to estimate a patient's speech perception in real-life situations. The objective of this study was to assess the effectiveness of audiological rehabilitation of CI patients using the Russian-language version of the Matrix test (RUMatrix) in free field in a noisy environment. The study included 33 patients aged 5 to 40 years, each with more than 3 years of experience using cochlear implants inserted at the National Research Center for Audiology and Hearing Rehabilitation; five of these patients were implanted bilaterally. The results showed a statistically significant improvement in speech intelligibility in noise after speech processor adjustment; the mean change in signal-to-noise ratio was -1.7 dB (p<0.001). The RUMatrix test is a highly efficient method for estimating speech intelligibility in noise in patients undergoing clinical investigation, and its high degree of comparability with the Matrix tests in other languages makes it suitable for international multicenter studies.
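
    Matrix-style sentence tests typically estimate the speech reception threshold (the SNR yielding 50% word intelligibility) with an adaptive track. The toy update rule below sketches that general idea only; it is not the published RUMatrix procedure, and all values are invented.

    ```python
    def adaptive_srt(trial_scores, start_snr=0.0, step=2.0, target=0.5):
        """Toy adaptive track: each sentence's proportion of correctly repeated
        words nudges the SNR toward the target intelligibility level."""
        snr, track = start_snr, []
        for p_correct in trial_scores:
            snr -= step * (p_correct - target)  # better than target -> make it harder
            track.append(snr)
        return sum(track[-4:]) / 4              # SRT: mean SNR over the final trials

    # Hypothetical per-sentence word scores over eight trials:
    print(adaptive_srt([1.0, 0.8, 0.6, 0.4, 0.6, 0.4, 0.6, 0.4]))  # SRT in dB SNR
    ```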

  4. Temporal relation between top-down and bottom-up processing in lexical tone perception

    PubMed Central

    Shuai, Lan; Gong, Tao

    2013-01-01

    Speech perception entails both top-down processing that relies primarily on language experience and bottom-up processing that depends mainly on instant auditory input. Previous models of speech perception often claim that bottom-up processing occurs in an early time window, whereas top-down processing takes place in a late time window after stimulus onset. In this paper, we evaluated the temporal relation of both types of processing in lexical tone perception. We conducted a series of event-related potential (ERP) experiments that recruited Mandarin participants and adopted three experimental paradigms, namely dichotic listening, lexical decision with phonological priming, and semantic violation. By systematically analyzing the lateralization patterns of the early and late ERP components observed in these experiments, we discovered that auditory processing of pitch variations in tones, as a bottom-up effect, elicited greater right hemisphere activation; in contrast, linguistic processing of lexical tones, as a top-down effect, elicited greater left hemisphere activation. We also found that both types of processing co-occurred in both the early (around 200 ms) and late (around 300–500 ms) time windows, which supports a parallel model of lexical tone perception. Contrary to the previous view that language processing is special and performed by dedicated neural circuitry, our study shows that language processing can be decomposed into general cognitive functions (e.g., sensory processing and memory) and shares neural resources with these functions. PMID:24723863

  5. Talker-specific learning in amnesia: Insight into mechanisms of adaptive speech perception.

    PubMed

    Trude, Alison M; Duff, Melissa C; Brown-Schmidt, Sarah

    2014-05-01

    A hallmark of human speech perception is the ability to comprehend speech quickly and effortlessly despite enormous variability across talkers. However, current theories of speech perception do not make specific claims about the memory mechanisms involved in this process. To examine whether declarative memory is necessary for talker-specific learning, we tested the ability of amnesic patients with severe declarative memory deficits to learn and distinguish the accents of two unfamiliar talkers by monitoring their eye-gaze as they followed spoken instructions. Analyses of the time-course of eye fixations showed that amnesic patients rapidly learned to distinguish these accents and tailored perceptual processes to the voice of each talker. These results demonstrate that declarative memory is not necessary for this ability and point to the involvement of non-declarative memory mechanisms. These results are consistent with findings that other social and accommodative behaviors are preserved in amnesia, and they contribute to our understanding of the interactions of multiple memory systems in the use and understanding of spoken language.

  6. Effect of "developmental speech and language training through music" on speech production in children with autism spectrum disorders.

    PubMed

    Lim, Hayoung A

    2010-01-01

    The study compared the effects of music training, speech training and no training on the verbal production of children with Autism Spectrum Disorders (ASD). Participants were 50 children with ASD, age range 3 to 5 years, who had previously been evaluated on standard tests of language and level of functioning. They were randomly assigned to one of three 3-day conditions. Participants in music training (n = 18) watched a music video containing 6 songs and pictures of the 36 target words; those in speech training (n = 18) watched a speech video containing 6 stories and pictures, and those in the control condition (n = 14) received no treatment. Participants' verbal production, including semantics, phonology, pragmatics, and prosody, was measured by an experimenter-designed verbal production evaluation scale. Results showed that participants in both music and speech training significantly increased their verbal production from pretest to posttest. Results also indicated that both high and low functioning participants improved their speech production after receiving either music or speech training; however, low functioning participants showed a greater improvement after the music training than the speech training. Children with ASD perceive important linguistic information embedded in music stimuli organized by principles of pattern perception, and can produce functional speech.

  7. The influence of hearing aids on the speech and language development of children with hearing loss.

    PubMed

    Tomblin, J Bruce; Oleson, Jacob J; Ambrose, Sophie E; Walker, Elizabeth; Moeller, Mary Pat

    2014-05-01

    IMPORTANCE Hearing loss (HL) in children can be deleterious to their speech and language development. The standard of practice has been early provision of hearing aids (HAs) to moderate these effects; however, there have been few empirical studies evaluating the effectiveness of this practice on speech and language development among children with mild-to-severe HL. OBJECTIVE To investigate the contributions of aided hearing and duration of HA use to speech and language outcomes in children with mild-to-severe HL. DESIGN, SETTING, AND PARTICIPANTS An observational cross-sectional design was used to examine the association of aided hearing levels and length of HA use with levels of speech and language outcomes. One hundred eighty 3- and 5-year-old children with HL were recruited through records of Universal Newborn Hearing Screening and referrals from clinical service providers in the general community in 6 US states. INTERVENTIONS All but 4 children had been fitted with HAs, and measures of aided hearing and the duration of HA use were obtained. MAIN OUTCOMES AND MEASURES Standardized measures of speech and language ability were obtained. RESULTS Measures of the gain in hearing ability for speech provided by the HA were significantly correlated with levels of speech (ρ(179) = 0.20; P = .008) and language (ρ(155) = 0.21; P = .01) ability. These correlations were indicative of modest levels of association between aided hearing and speech and language outcomes. These benefits were found for children with mild and moderate-to-severe HL. In addition, the amount of benefit from aided hearing interacted with the duration of HA experience (Speech: F(4,161) = 4.98; P < .001; Language: F(4,138) = 2.91; P < .02). Longer duration of HA experience was most beneficial for children who had the best aided hearing. CONCLUSIONS AND RELEVANCE The degree of improved hearing provided by HAs was associated with better speech and language development in children. In addition, the duration of HA experience interacted with the aided hearing to influence outcomes. These results provide support for the provision of well-fitted HAs to children with HL. In particular, the findings support early HA fitting and HA provision to children with mild HL.

  8. Cultural and language differences in voice quality perception: a preliminary investigation using synthesized signals.

    PubMed

    Yiu, Edwin M-L; Murdoch, Bruce; Hird, Kathryn; Lau, Polly; Ho, Elaine Mandy

    2008-01-01

    Perceptual voice evaluation is a common clinical tool. However, to date, there is no consensus as to which common qualities should be measured. Some available evidence shows that voice quality is a language-specific property that may differ across languages. Familiarity with a language may affect the perception and the reliability of rating voice quality. The present study set out to investigate the effects of listeners' cultural and language backgrounds on the perception of voice qualities. Forty speech pathology students from Australia and Hong Kong were asked to rate the breathy and rough qualities of synthesized voice signals in Cantonese and English. Results showed that the English stimulus sets as a whole were rated less severely than the Cantonese stimuli by both groups of listeners. In addition, the male Cantonese and English breathy stimuli were rated differently by the Australian and Hong Kong listeners. These results provide some evidence to support the claim that the cultural and language backgrounds of listeners affect the perception of some voice quality types. Thus, the cultural and language backgrounds of judges should be taken into consideration in clinical voice evaluation.

  9. Electrophysiological and hemodynamic mismatch responses in rats listening to human speech syllables.

    PubMed

    Mahmoudzadeh, Mahdi; Dehaene-Lambertz, Ghislaine; Wallois, Fabrice

    2017-01-01

    Speech is a complex auditory stimulus which is processed according to several time-scales. Whereas consonant discrimination requires resolving rapid acoustic events, voice perception relies on slower cues. Humans, from as early as preterm ages, are particularly efficient at encoding temporal cues. To compare the capacities of preterm infants to those observed in other mammals, we tested anesthetized adult rats using exactly the same paradigm as that used in preterm neonates. We simultaneously recorded neural responses (using ECoG) and hemodynamic responses (using fNIRS) to series of human speech syllables and investigated the brain response to a change of consonant (ba vs. ga) and to a change of voice (male vs. female). Both methods revealed concordant results, although ECoG measures were more sensitive than fNIRS. Responses to syllables were bilateral, but with marked right-hemispheric lateralization. Responses to voice changes were observed with both methods, while only ECoG was sensitive to consonant changes. These results suggest that rats processed the speech envelope more effectively than fine temporal cues, in contrast with human preterm neonates, in whom the opposite effects were observed. Cross-species comparisons constitute a very valuable tool for defining the singularities of the human brain and the species-specific biases that may help human infants learn their native language.

  10. Auditory Processing in Specific Language Impairment (SLI): Relations With the Perception of Lexical and Phrasal Stress.

    PubMed

    Richards, Susan; Goswami, Usha

    2015-08-01

    We investigated whether impaired acoustic processing is a factor in developmental language disorders. The amplitude envelope of the speech signal is known to be important in language processing. We examined whether impaired perception of amplitude envelope rise time is related to impaired perception of lexical and phrasal stress in children with specific language impairment (SLI). Twenty-two children aged between 8 and 12 years participated in this study. Twelve had SLI; 10 were typically developing controls. All children completed psychoacoustic tasks measuring rise time, intensity, frequency, and duration discrimination. They also completed 2 linguistic stress tasks measuring lexical and phrasal stress perception. The SLI group scored significantly below the typically developing controls on both stress perception tasks. Performance on stress tasks correlated with individual differences in auditory sensitivity. Rise time and frequency thresholds accounted for the most unique variance. Digit Span also contributed to task success for the SLI group. The SLI group had difficulties with both acoustic and stress perception tasks. Our data suggest that poor sensitivity to amplitude rise time and sound frequency significantly contributes to the stress perception skills of children with SLI. Other cognitive factors such as phonological memory are also implicated.

  11. A Perceptual Phonetic Similarity Space for Languages: Evidence from Five Native Language Listener Groups

    PubMed Central

    Bradlow, Ann; Clopper, Cynthia; Smiljanic, Rajka; Walter, Mary Ann

    2010-01-01

    The goal of the present study was to devise a means of representing languages in a perceptual similarity space based on their overall phonetic similarity. In Experiment 1, native English listeners performed a free classification task in which they grouped 17 diverse languages based on their perceived phonetic similarity. A similarity matrix of the grouping patterns was then submitted to clustering and multidimensional scaling analyses. In Experiment 2, an independent group of native English listeners sorted the group of 17 languages in terms of their distance from English. Experiment 3 repeated Experiment 2 with four groups of non-native English listeners: Dutch, Mandarin, Turkish and Korean listeners. Taken together, the results of these three experiments represent a step towards establishing an approach to assessing the overall phonetic similarity of languages. This approach could potentially provide the basis for developing predictions regarding foreign-accented speech intelligibility for various listener groups, and regarding speech perception accuracy in the context of background noise in various languages. PMID:21179563
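
    A sketch of the scaling step under common assumptions: pairwise co-grouping proportions from the free classification task are converted to dissimilarities and embedded in two dimensions with metric multidimensional scaling. The 4-language matrix below is invented for illustration.

    ```python
    import numpy as np
    from sklearn.manifold import MDS

    # Invented co-grouping proportions for four languages (1 = always grouped together).
    sim = np.array([
        [1.0, 0.8, 0.2, 0.1],
        [0.8, 1.0, 0.3, 0.2],
        [0.2, 0.3, 1.0, 0.7],
        [0.1, 0.2, 0.7, 1.0],
    ])
    dissim = 1.0 - sim  # similarity -> dissimilarity

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissim)  # 2-D perceptual similarity space
    print(np.round(coords, 2))
    ```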

  12. Tracking the Speech Signal--Time-Locked MEG Signals during Perception of Ultra-Fast and Moderately Fast Speech in Blind and in Sighted Listeners

    ERIC Educational Resources Information Center

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2013-01-01

    Blind people can learn to understand speech at ultra-high syllable rates (ca. 20 syllables/s), a capability associated with hemodynamic activation of the central-visual system. To further elucidate the neural mechanisms underlying this skill, magnetoencephalographic (MEG) measurements during listening to sentence utterances were cross-correlated…

  13. Effects of stimulus duration and vowel quality in cross-linguistic categorical perception of pitch directions

    PubMed Central

    Zhu, Yiqing; Wayland, Ratree

    2017-01-01

    We investigated categorical perception of rising and falling pitch contours by tonal and non-tonal listeners. Specifically, we determined the minimum durations needed to perceive both contours and compared them to those needed for production, examined how stimulus duration affects perception, tested whether there is an intrinsic F0 effect, and asked how first language background, duration, pitch direction, and vowel quality interact with each other. Continua of fundamental frequency on different vowels with 9 duration values were created for identification and discrimination tasks. Less time is generally needed to effectively perceive a pitch direction than to produce it. Overall, tonal listeners' perception is more categorical than that of non-tonal listeners. Stimulus duration plays a critical role for both groups, but tonal listeners showed a stronger duration effect and may benefit more from the extra time in longer stimuli for context-coding, consistent with the multistore model of categorical perception. Within a certain range of semitones, tonal listeners also required shorter stimulus durations to perceive pitch direction changes than non-tonal listeners. Finally, vowel quality plays a limited role and only interacts with duration in the perception of falling pitch directions. These findings further our understanding of models of categorical perception, the relationship between speech perception and production, and the interaction between the perception of tones and vowel quality. PMID:28671991

  14. Sensory-Cognitive Interaction in the Neural Encoding of Speech in Noise: A Review

    PubMed Central

    Anderson, Samira; Kraus, Nina

    2011-01-01

    Background Speech-in-noise (SIN) perception is one of the most complex tasks faced by listeners on a daily basis. Although listening in noise presents challenges for all listeners, background noise inordinately affects speech perception in older adults and in children with learning disabilities. Hearing thresholds are an important factor in SIN perception, but they are not the only factor. For successful comprehension, the listener must perceive and attend to relevant speech features, such as the pitch, timing, and timbre of the target speaker’s voice. Here, we review recent studies linking SIN and brainstem processing of speech sounds. Purpose To review recent work that has examined the ability of the auditory brainstem response to complex sounds (cABR), which reflects the nervous system’s transcription of pitch, timing, and timbre, to be used as an objective neural index for hearing-in-noise abilities. Study Sample We examined speech-evoked brainstem responses in a variety of populations, including children who are typically developing, children with language-based learning impairment, young adults, older adults, and auditory experts (i.e., musicians). Data Collection and Analysis In a number of studies, we recorded brainstem responses in quiet and babble noise conditions to the speech syllable /da/ in all age groups, as well as in a variable condition in children in which /da/ was presented in the context of seven other speech sounds. We also measured speech-in-noise perception using the Hearing-in-Noise Test (HINT) and the Quick Speech-in-Noise Test (QuickSIN). Results Children and adults with poor SIN perception have deficits in the subcortical spectrotemporal representation of speech, including low-frequency spectral magnitudes and the timing of transient response peaks. Furthermore, auditory expertise, as engendered by musical training, provides both behavioral and neural advantages for processing speech in noise. Conclusions These results have implications for future assessment and management strategies for young and old populations whose primary complaint is difficulty hearing in background noise. The cABR provides a clinically applicable metric for objective assessment of individuals with SIN deficits, for determination of the biologic nature of disorders affecting SIN perception, for evaluation of appropriate hearing aid algorithms, and for monitoring the efficacy of auditory remediation and training. PMID:21241645

  15. Mondegreens and Soramimi as a Method to Induce Misperceptions of Speech Content – Influence of Familiarity, Wittiness, and Language Competence

    PubMed Central

    Beck, Claudia; Kardatzki, Bernd; Ethofer, Thomas

    2014-01-01

    Expectations and prior knowledge can strongly influence our perception. In vision research, such top-down modulation of perceptual processing has been extensively studied using ambiguous stimuli, such as reversible figures. Here, we propose a novel method to address this issue in the auditory modality during speech perception by means of Mondegreens and Soramimi, which represent song lyrics with the potential for misperception within one language or across two languages, respectively. We demonstrate that such phenomena can be induced by visual presentation of the alternative percept and occur with sufficient probability to exploit them in neuroscientific experiments. Song familiarity did not influence the occurrence of such altered perception, indicating that this tool can be employed irrespective of the participants’ knowledge of the music. On the other hand, previous knowledge of the alternative percept had a strong impact on the strength of altered perception, which is in line with frequent reports that these phenomena can have long-lasting effects. Finally, we demonstrate that the strength of changes in perception correlated with the extent to which they were experienced as amusing, as well as with the vocabulary of the participants as a source of potential interpretations. These findings suggest that such perceptual phenomena may be linked to the pleasant experience of resolving ambiguity, in line with Hermann von Helmholtz's long-standing proposal that perception and problem-solving recruit similar processes. PMID:24416261

  16. Brainstem transcription of speech is disrupted in children with autism spectrum disorders

    PubMed Central

    Russo, Nicole; Nicol, Trent; Trommer, Barbara; Zecker, Steve; Kraus, Nina

    2009-01-01

    Language impairment is a hallmark of autism spectrum disorders (ASD). The origin of the deficit is poorly understood although deficiencies in auditory processing have been detected in both perception and cortical encoding of speech sounds. Little is known about the processing and transcription of speech sounds at earlier (brainstem) levels or about how background noise may impact this transcription process. Unlike cortical encoding of sounds, brainstem representation preserves stimulus features with a degree of fidelity that enables a direct link between acoustic components of the speech syllable (e.g., onsets) and specific aspects of neural encoding (e.g., waves V and A). We measured brainstem responses to the syllable /da/, in quiet and background noise, in children with and without ASD. Children with ASD exhibited deficits in both the neural synchrony (timing) and phase locking (frequency encoding) of speech sounds, despite normal click-evoked brainstem responses. They also exhibited reduced magnitude and fidelity of speech-evoked responses and inordinate degradation of responses by background noise in comparison to typically developing controls. Neural synchrony in noise was significantly related to measures of core and receptive language ability. These data support the idea that abnormalities in the brainstem processing of speech contribute to the language impairment in ASD. Because it is both passively elicited and malleable, the speech-evoked brainstem response may serve as a clinical tool to assess auditory processing as well as the effects of auditory training in the ASD population. PMID:19635083

  17. The functional neuroanatomy of language

    NASA Astrophysics Data System (ADS)

    Hickok, Gregory

    2009-09-01

    There has been substantial progress over the last several years in understanding aspects of the functional neuroanatomy of language. Some of these advances are summarized in this review. It will be argued that recognizing speech sounds is carried out in the superior temporal lobe bilaterally, that the superior temporal sulcus bilaterally is involved in phonological-level aspects of this process, that the frontal/motor system is not central to speech recognition although it may modulate auditory perception of speech, that conceptual access mechanisms are likely located in the lateral posterior temporal lobe (middle and inferior temporal gyri), that speech production involves sensory-related systems in the posterior superior temporal lobe in the left hemisphere, that the interface between perceptual and motor systems is supported by a sensory-motor circuit for vocal tract actions (not dedicated to speech) that is very similar to sensory-motor circuits found in primate parietal lobe, and that verbal short-term memory can be understood as an emergent property of this sensory-motor circuit. These observations are considered within the context of a dual stream model of speech processing in which one pathway supports speech comprehension and the other supports sensory-motor integration. Additional topics of discussion include the functional organization of the planum temporale for spatial hearing and speech-related sensory-motor processes, the anatomical and functional basis of a form of acquired language disorder, conduction aphasia, the neural basis of vocabulary development, and sentence-level/grammatical processing.

  18. Perceptions of Parents and Speech and Language Therapists on the Effects of Paediatric Cochlear Implantation and Habilitation and Education Following It

    ERIC Educational Resources Information Center

    Huttunen, Kerttu; Valimaa, Taina

    2012-01-01

    Background: During the process of implantation, parents may have rather heterogeneous expectations and concerns about their child's development and the functioning of habilitation and education services. Their views on habilitation and education are important for building family-centred practices. Aims: We explored the perceptions of parents and…

  19. Evaluation of speech errors in Putonghua speakers with cleft palate: a critical review of methodology issues.

    PubMed

    Jiang, Chenghui; Whitehill, Tara L

    2014-04-01

    Speech errors associated with cleft palate are well established for English and several other Indo-European languages. Few articles describing the speech of Putonghua (standard Mandarin Chinese) speakers with cleft palate have been published in English language journals. Although methodological guidelines have been published for the perceptual speech evaluation of individuals with cleft palate, there has been no critical review of methodological issues in studies of Putonghua speakers with cleft palate. A literature search was conducted to identify relevant studies published over the past 30 years in Chinese language journals. Only studies incorporating perceptual analysis of speech were included. Thirty-seven articles that met the inclusion criteria were analyzed and coded on a number of methodological variables. Reliability was established by having all variables recoded for all studies. This critical review identified many methodological issues. These design flaws make it difficult to draw reliable conclusions about characteristic speech errors in this group of speakers. Specific recommendations are made to improve the reliability and validity of future studies, as well as to facilitate cross-center comparisons.

  20. Narrative Skills in Children with Selective Mutism: An Exploratory Study

    ERIC Educational Resources Information Center

    McInnes, Alison; Fung, Daniel; Manassis, Katharina; Fiksenbaum, Lisa; Tannock, Rosemary

    2004-01-01

    Selective mutism (SM) is a rare and complex disorder associated with anxiety symptoms and speech-language deficits; however, the nature of these language deficits has not been studied systematically. A novel cross-disciplinary assessment protocol was used to assess anxiety and nonverbal cognitive, receptive language, and expressive narrative…

  1. Birds, primates, and spoken language origins: behavioral phenotypes and neurobiological substrates

    PubMed Central

    Petkov, Christopher I.; Jarvis, Erich D.

    2012-01-01

    Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories comprises motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set comprises cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that, behaviorally, vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species. PMID:22912615

  2. Temporal plasticity in auditory cortex improves neural discrimination of speech sounds

    PubMed Central

    Engineer, Crystal T.; Shetake, Jai A.; Engineer, Navzer D.; Vrana, Will A.; Wolf, Jordan T.; Kilgard, Michael P.

    2017-01-01

    Background Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. Objective/Hypothesis We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. Methods VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Results Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. Conclusion This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. PMID:28131520

  3. Cortical Plasticity after Cochlear Implantation

    PubMed Central

    Petersen, B.; Gjedde, A.; Wallentin, M.; Vuust, P.

    2013-01-01

    The most dramatic progress in the restoration of hearing takes place in the first months after cochlear implantation. To map the brain activity underlying this process, we used positron emission tomography at three time points: within 14 days, three months, and six months after switch-on. Fifteen recently implanted adults listened to running speech or speech-like noise in four sequential PET sessions at each milestone. CI listeners with postlingual hearing loss showed differential activation of left superior temporal gyrus during speech and speech-like stimuli, unlike CI listeners with prelingual hearing loss. Furthermore, Broca's area was activated as an effect of time, but only in CI listeners with postlingual hearing loss. The study demonstrates that adaptation to the cochlear implant is highly related to the history of hearing loss. Speech processing in patients whose hearing loss occurred after the acquisition of language involves brain areas associated with speech comprehension, which is not the case for patients whose hearing loss occurred before the acquisition of language. Finally, the findings confirm the key role of Broca's area in restoration of speech perception, but only in individuals in whom Broca's area has been active prior to the loss of hearing. PMID:24377050

  4. Effect of Listeners' Linguistic Background on Perceptual Judgements of Hypernasality

    ERIC Educational Resources Information Center

    Lee, Alice; Brown, Susanna; Gibbon, Fiona E.

    2008-01-01

    Background: Many speech and language therapists work in a multilingual environment, making cross-linguistic studies of speech disorders clinically and theoretically important. Aims: To investigate the effect of listeners' linguistic background on their perceptual ratings of hypernasality and the reliability of the ratings. Methods &…

  5. Impaired extraction of speech rhythm from temporal modulation patterns in speech in developmental dyslexia

    PubMed Central

    Leong, Victoria; Goswami, Usha

    2014-01-01

    Dyslexia is associated with impaired neural representation of the sound structure of words (phonology). The “phonological deficit” in dyslexia may arise in part from impaired speech rhythm perception, thought to depend on neural oscillatory phase-locking to slow amplitude modulation (AM) patterns in the speech envelope. Speech contains AM patterns at multiple temporal rates, and these different AM rates are associated with phonological units of different grain sizes, e.g., related to stress, syllables or phonemes. Here, we assess the ability of adults with dyslexia to use speech AMs to identify rhythm patterns (RPs). We study 3 important temporal rates: “Stress” (~2 Hz), “Syllable” (~4 Hz) and “Sub-beat” (reduced syllables, ~14 Hz). Twenty-one dyslexics and 21 controls listened to nursery rhyme sentences that had been tone-vocoded using either single AM rates from the speech envelope (Stress only, Syllable only, Sub-beat only) or pairs of AM rates (Stress + Syllable, Syllable + Sub-beat). They were asked to use the acoustic rhythm of the stimulus to identify the original nursery rhyme sentence. The data showed that dyslexics were significantly poorer at detecting rhythm compared to controls when they had to utilize multi-rate temporal information from pairs of AMs (Stress + Syllable or Syllable + Sub-beat). These data suggest that dyslexia is associated with a reduced ability to utilize AMs <20 Hz for rhythm recognition. This perceptual deficit in utilizing AM patterns in speech could be underpinned by less efficient neuronal phase alignment and cross-frequency neuronal oscillatory synchronization in dyslexia. Dyslexics’ perceptual difficulties in capturing the full spectro-temporal complexity of speech over multiple timescales could contribute to the development of impaired phonological representations for words, the cognitive hallmark of dyslexia across languages. PMID:24605099
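
    A minimal sketch of the single-AM-rate tone vocoding described above: extract the broadband amplitude envelope, retain only one modulation-rate band, and re-impose it on a tone carrier. The envelope rate, filter order, and carrier frequency are assumptions; the study's own vocoder almost certainly differed in detail.

    ```python
    # Sketch of single-AM-rate tone vocoding: keep one modulation band of the
    # broadband envelope and re-impose it on a tone carrier. The envelope rate,
    # filter order, and carrier frequency are assumptions for illustration.
    import numpy as np
    from scipy.signal import hilbert, butter, sosfiltfilt, resample_poly

    ENV_FS = 400  # envelope rate (Hz) after downsampling, for stable filtering

    def am_band_vocode(x, band, fs=16000, carrier_hz=500.0):
        """band = (lo, hi) in Hz, e.g. (0.1, 4) approximating the low-pass
        'Stress' region or (22, 40) for a band-pass condition; x is a float
        speech waveform sampled at fs."""
        env = np.abs(hilbert(x))                       # broadband amplitude envelope
        env = resample_poly(env, ENV_FS, fs)           # downsample before filtering
        sos = butter(2, band, btype="bandpass", fs=ENV_FS, output="sos")
        env_band = np.clip(sosfiltfilt(sos, env), 0, None)  # keep one AM band
        env_band = resample_poly(env_band, fs, ENV_FS)      # back to audio rate
        t = np.arange(env_band.size) / fs
        y = env_band * np.sin(2 * np.pi * carrier_hz * t)   # re-impose on tone
        return y / (np.abs(y).max() + 1e-12)
    ```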

  6. Cross-Cultural Communication through Course Linkage: Utilizing Experiential Learning in Speech 110 (Introduction to Speech/Communication) & ESL 009 (Oral Skills).

    ERIC Educational Resources Information Center

    Mackler, Tobi; Savard, Theresa

    Taking advantage of the opportunity to heighten cultural awareness and create an intercultural exchange, this paper presents two articles that provide a summary of the rationale, methodology, and assignments used to teach the linked courses of an introductory speech communication course and an English-as-a-Second-Language Oral Skills course. The…

  7. Cognitive control and its impact on recovery from aphasic stroke

    PubMed Central

    Warren, Jane E.; Geranmayeh, Fatemeh; Woodhead, Zoe; Leech, Robert; Wise, Richard J. S.

    2014-01-01

    Aphasic deficits are usually only interpreted in terms of domain-specific language processes. However, effective human communication and tests that probe this complex cognitive skill are also dependent on domain-general processes. In the clinical context, it is a pragmatic observation that impaired attention and executive functions interfere with the rehabilitation of aphasia. One system that is important in cognitive control is the salience network, which includes dorsal anterior cingulate cortex and adjacent cortex in the superior frontal gyrus (midline frontal cortex). This functional imaging study assessed domain-general activity in the midline frontal cortex, which was remote from the infarct, in relation to performance on a standard test of spoken language in 16 chronic aphasic patients both before and after a rehabilitation programme. During scanning, participants heard simple sentences, with each listening trial followed immediately by a trial in which they repeated back the previous sentence. Listening to sentences in the context of a listen–repeat task was expected to activate regions involved in both language-specific processes (speech perception and comprehension, verbal working memory and pre-articulatory rehearsal) and a number of task-specific processes (including attention to utterances and attempts to overcome pre-response conflict and decision uncertainty during impaired speech perception). To visualize the same system in healthy participants, sentences were presented to them as three-channel noise-vocoded speech, thereby impairing speech perception and assessing whether this evokes domain-general cognitive systems. As expected, contrasting the more difficult task of perceiving and preparing to repeat noise-vocoded speech with the same task on clear speech demonstrated increased activity in the midline frontal cortex in the healthy participants. The same region was activated in the aphasic patients as they listened to standard (undistorted) sentences. Using a region of interest defined from the data on the healthy participants, data from the midline frontal cortex were obtained from the patients. Across the group and across different scanning sessions, activity correlated significantly with the patients’ communicative abilities. This correlation was not influenced by lesion size or the patients’ chronological ages. This is the first study that has directly correlated activity in a domain-general system, specifically the salience network, with residual language performance in post-stroke aphasia. It provides direct evidence in support of the clinical intuition that domain-general cognitive control is an essential factor contributing to the potential for recovery from aphasic stroke. PMID:24163248
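
    The final analysis step, correlating midline frontal activity with communicative ability while ruling out lesion size and age, can be sketched as a partial correlation. The arrays below are hypothetical stand-ins for the patient data, not the study's measurements.

    ```python
    # Sketch of the final analysis step: correlate midline frontal ROI activity
    # with communicative ability, then repeat after regressing out lesion size
    # and age. All arrays are hypothetical stand-ins for the patient data.
    import numpy as np

    def partial_corr(x, y, covariates):
        """Correlate x and y after removing, by least squares, what the
        covariates (plus an intercept) explain in each."""
        C = np.column_stack([np.ones(len(x))] + list(covariates))
        rx = x - C @ np.linalg.lstsq(C, x, rcond=None)[0]
        ry = y - C @ np.linalg.lstsq(C, y, rcond=None)[0]
        return np.corrcoef(rx, ry)[0, 1]

    rng = np.random.default_rng(0)
    roi = rng.normal(size=16)                 # mean ROI activity per patient
    comm = 0.6 * roi + rng.normal(size=16)    # communicative ability score
    lesion = rng.normal(size=16)              # lesion volume
    age = rng.normal(size=16)                 # chronological age

    print("simple r :", np.corrcoef(roi, comm)[0, 1])
    print("partial r:", partial_corr(roi, comm, [lesion, age]))
    ```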

  8. Low-level neural auditory discrimination dysfunctions in specific language impairment-A review on mismatch negativity findings.

    PubMed

    Kujala, Teija; Leminen, Miika

    2017-12-01

    In specific language impairment (SLI), there is a delay in the child's oral language skills when compared with nonverbal cognitive abilities. The problems typically relate to phonological and morphological processing and word learning. This article reviews studies that have used mismatch negativity (MMN) to investigate low-level neural auditory dysfunctions in this disorder. With MMN, it is possible to tap the accuracy of neural sound discrimination and sensory memory functions. These studies have found smaller response amplitudes and longer latencies for speech and non-speech sound changes in children with SLI than in typically developing children, suggesting impaired and slow auditory discrimination in SLI. Furthermore, they suggest shortened sensory memory duration and vulnerability of the sensory memory to masking effects. Importantly, some studies reported associations between MMN parameters and language test measures. In addition, language intervention was found to influence the abnormal MMN in children with SLI, enhancing its amplitude. These results suggest that the MMN can shed light on the neural basis of various auditory and memory impairments in SLI, which are likely to influence speech perception.

  9. Neural pathways for visual speech perception

    PubMed Central

    Bernstein, Lynne E.; Liebenthal, Einat

    2014-01-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611

  10. Perception of emotionally loaded vocal expressions and its connection to responses to music. A cross-cultural investigation: Estonia, Finland, Sweden, Russia, and the USA

    PubMed Central

    Waaramaa, Teija; Leisiö, Timo

    2013-01-01

    The present study focused on voice quality and the perception of the basic emotions from speech samples under cross-cultural conditions. It was examined whether voice quality, cultural or language background, age, or gender were related to the identification of the emotions. Professional actors (n = 2) and actresses (n = 2) produced nonsense sentences (n = 32) and protracted vowels (n = 8) expressing the six basic emotions, interest, and a neutral emotional state. The impact of musical interests on the ability to distinguish emotions or valence (on an axis from positivity through neutrality to negativity) from voice samples was studied. Listening tests were conducted on location in five countries: Estonia, Finland, Russia, Sweden, and the USA, with 50 randomly chosen participants (25 males and 25 females) in each country. The participants (total N = 250) completed a questionnaire eliciting their background information and musical interests. The responses in the listening test and the questionnaires were statistically analyzed. Voice quality parameters and the share of emotions and valence identified correlated significantly with each other for both genders. The percentage of emotions and valence identified was clearly above chance level in each of the five countries studied; however, the countries differed significantly from each other in the emotions identified and in the gender of the speaker. The samples produced by females were identified significantly better than those produced by males. Listeners' age was a significant variable. Only minor gender differences were found in identification. Perceptual confusion between emotions in the listening test seemed to depend on their similar voice production types. Musical interests tended to have a positive effect on the identification of the emotions. The results also suggest that identifying emotions from speech samples may be easier for listeners who share a similar language or cultural background with the speaker. PMID:23801972

  11. Comparison of Two Music Training Approaches on Music and Speech Perception in Cochlear Implant Users

    PubMed Central

    Fuller, Christina D.; Galvin, John J.; Maat, Bert; Başkent, Deniz; Free, Rolien H.

    2018-01-01

    In normal-hearing (NH) adults, long-term music training may benefit music and speech perception, even when listening to spectro-temporally degraded signals as experienced by cochlear implant (CI) users. In this study, we compared two different music training approaches in CI users and their effects on speech and music perception, as it remains unclear which approach to music training might be best. The approaches differed in terms of music exercises and social interaction. For the pitch/timbre group, melodic contour identification (MCI) training was performed using computer software. For the music therapy group, training involved face-to-face group exercises (rhythm perception, musical speech perception, music perception, singing, vocal emotion identification, and music improvisation). For the control group, training involved group nonmusic activities (e.g., writing, cooking, and woodworking). Training consisted of weekly 2-hr sessions over a 6-week period. Speech intelligibility in quiet and noise, vocal emotion identification, MCI, and quality of life (QoL) were measured before and after training. The different training approaches appeared to offer different benefits for music and speech perception. Training effects were observed within-domain (better MCI performance for the pitch/timbre group), with little cross-domain transfer of music training (emotion identification significantly improved for the music therapy group). While training had no significant effect on QoL, the music therapy group reported better perceptual skills across training sessions. These results suggest that more extensive and intensive training approaches that combine pitch training with the social aspects of music therapy may further benefit CI users. PMID:29621947

  13. Perception of English Intonation by English, Spanish, and Chinese Listeners

    ERIC Educational Resources Information Center

    Grabe, Esther; Rosner, Burton S.; Garcia-Albea, Jose E.; Zhou, Xiaolin

    2003-01-01

    Native language affects the perception of segmental phonetic structure, of stress, and of semantic and pragmatic effects of intonation. Similarly, native language might influence the perception of similarities and differences among intonation contours. To test this hypothesis, a cross-language experiment was conducted. An English utterance was…

  14. Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.

    PubMed

    Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie

    2003-05-08

    We investigated the existence of cross-modal sensory gating as reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information was redundant (i.e. congruent), in comparison with the same event-related potential component elicited by discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We concluded that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system includes a cross-modal dimension.
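
    A sketch of how the P50 attenuation reported above can be quantified: average the epochs of each condition and take the positive peak near 50 ms post-stimulus. The sampling rate, search window, and toy data are assumptions for illustration.

    ```python
    # Sketch of quantifying P50 attenuation: average each condition's epochs
    # and measure the positive peak near 50 ms post-stimulus. Sampling rate,
    # search window, and the toy data are assumptions for illustration.
    import numpy as np

    FS = 1000  # EEG sampling rate (Hz); epochs start at stimulus onset

    def p50_amplitude(epochs, t0_ms=40, t1_ms=70):
        """Peak amplitude of the trial-averaged ERP in the P50 window."""
        erp = epochs.mean(axis=0)                        # average across trials
        i0, i1 = int(t0_ms * FS / 1000), int(t1_ms * FS / 1000)
        return erp[i0:i1].max()

    rng = np.random.default_rng(1)
    congruent = rng.normal(size=(80, 300))               # 80 trials x 300 samples
    incongruent = rng.normal(size=(80, 300)) + 0.1       # toy amplitude offset
    print("congruent P50  :", p50_amplitude(congruent))
    print("incongruent P50:", p50_amplitude(incongruent))
    ```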

  15. Human voice perception.

    PubMed

    Latinus, Marianne; Belin, Pascal

    2011-02-22

    We are all voice experts. First and foremost, we can produce and understand speech, and this makes us a unique species. But in addition to speech perception, we routinely extract from voices a wealth of socially-relevant information in what constitutes a more primitive, and probably more universal, non-linguistic mode of communication. Consider the following example: you are sitting in a plane, and you can hear a conversation in a foreign language in the row behind you. You do not see the speakers' faces, and you cannot understand the speech content because you do not know the language. Yet, an amazing amount of information is available to you. You can evaluate the physical characteristics of the different protagonists, including their gender, approximate age and size, and associate an identity to the different voices. You can form a good idea of the different speakers' moods and affective states, as well as more subtle cues such as the perceived attractiveness or dominance of the protagonists. In brief, you can form a fairly detailed picture of the type of social interaction unfolding, which a brief glance backwards can on occasion help refine - sometimes surprisingly so. What are the acoustical cues that carry these different types of vocal information? How does our brain process and analyse this information? Here we briefly review an emerging field and the main tools used in voice perception research.

  16. The Temporal Prediction of Stress in Speech and Its Relation to Musical Beat Perception.

    PubMed

    Beier, Eleonora J; Ferreira, Fernanda

    2018-01-01

    While rhythmic expectancies are thought to underlie beat perception in music, the extent to which stress patterns in speech are similarly represented and predicted during on-line language comprehension is debated. The temporal prediction of stress may be advantageous to speech processing, as stress patterns aid segmentation and mark new information in utterances. However, while linguistic stress patterns may be organized into hierarchical metrical structures similarly to musical meter, they do not typically present the same degree of periodicity. We review the theoretical background for the idea that stress patterns are predicted and address the following questions: First, what is the evidence that listeners can predict the temporal location of stress based on preceding rhythm? If they can, is it thanks to neural entrainment mechanisms similar to those utilized for musical beat perception? And lastly, what linguistic factors other than rhythm may account for the prediction of stress in natural speech? We conclude that while expectancies based on the periodic presentation of stress are at play in some of the current findings, other processes are likely to affect the prediction of stress in more naturalistic, less isochronous speech. Specifically, aspects of prosody other than amplitude changes (e.g., intonation) as well as lexical, syntactic and information structural constraints on the realization of stress may all contribute to the probabilistic expectation of stress in speech.

  17. Cross-Modal Facilitation in Speech Prosody

    ERIC Educational Resources Information Center

    Foxton, Jessica M.; Riviere, Louis-David; Barone, Pascal

    2010-01-01

    Speech prosody has traditionally been considered solely in terms of its auditory features, yet correlated visual features exist, such as head and eyebrow movements. This study investigated the extent to which visual prosodic features are able to affect the perception of the auditory features. Participants were presented with videos of a speaker…

  18. The Perceptions of Students in the Allied Health Professions towards Stroke Rehabilitation Teams and the SLP's Role

    ERIC Educational Resources Information Center

    Insalaco, Deborah; Ozkurt, Elcin; Santiago, Dign

    2007-01-01

    The purpose of this study was to determine the perceptions and knowledge of final-year speech-language pathology (SLP), physical and occupational therapy (PT, OT) students toward stroke rehabilitation teams and the SLPs' roles on them. The investigators adapted a survey developed by Felsher and Ross (1994) and administered it to 35 PT, 35 OT, and…

  19. [Test set for the evaluation of hearing and speech development after cochlear implantation in children].

    PubMed

    Lamprecht-Dinnesen, A; Sick, U; Sandrieser, P; Illg, A; Lesinski-Schiedat, A; Döring, W H; Müller-Deile, J; Kiefer, J; Matthias, K; Wüst, A; Konradi, E; Riebandt, M; Matulat, P; Von Der Haar-Heise, S; Swart, J; Elixmann, K; Neumann, K; Hildmann, A; Coninx, F; Meyer, V; Gross, M; Kruse, E; Lenarz, T

    2002-10-01

    Since autumn 1998 the multicenter interdisciplinary study group "Test Materials for CI Children" has been compiling a uniform examination tool for evaluating speech and hearing development after cochlear implantation in childhood. After a review of the relevant literature, suitable materials were checked for practical applicability, modified, and supplemented with criteria for administration and discontinuation. For data acquisition, observation forms designed for later conversion to a PC version were developed. The evaluation set contains master-data forms with supplements covering postoperative processes. The hearing tests assess suprathreshold hearing with loudness scaling for children, speech comprehension in quiet (Mainz and Göttingen Tests for Speech Comprehension in Childhood), phonemic differentiation (Oldenburg Rhyme Test for Children), the central auditory processes of detection, discrimination, identification, and recognition (a modification of the "Frankfurt Functional Hearing Test for Children"), and audiovisual speech perception (Open Paragraph Tracking, Kiel Speech Track Program). The materials for speech and language development cover phonetics-phonology, lexicon and semantics (LOGO Pronunciation Test), syntax and morphology (analysis of spontaneous speech), language comprehension (Reynell Scales), and communication and pragmatics (observation forms). The modified MAIS and MUSS questionnaires are integrated. The evaluation set serves quality assurance and, through multicenter comparison of long-term developmental trends after cochlear implantation, permits factor analyses and regularity checks.

  20. A novel speech-processing strategy incorporating tonal information for cochlear implants.

    PubMed

    Lan, N; Nie, K B; Gao, S K; Zeng, F G

    2004-05-01

    Good performance in cochlear implant users depends in large part on the ability of a speech processor to effectively decompose speech signals into multiple channels of narrow-band electrical pulses for stimulation of the auditory nerve. Speech processors that extract only envelopes of the narrow-band signals (e.g., the continuous interleaved sampling (CIS) processor) may not provide sufficient information to encode the tonal cues in languages such as Chinese. To improve the performance of cochlear implant users who speak a tonal language, we proposed and developed a novel speech-processing strategy, which extracted both the envelopes of the narrow-band signals and the fundamental frequency (F0) of the speech signal, and used them to modulate both the amplitude and the frequency of the electrical pulses delivered to the stimulation electrodes. We developed an algorithm to extract the fundamental frequency and identified the general patterns of pitch variation of the four typical tones in Chinese speech. The effectiveness of the extraction algorithm was verified with an artificial neural network that recognized the tonal patterns from the extracted F0 information. We then compared the novel strategy with the envelope-extraction CIS strategy in human subjects with normal hearing. The novel strategy produced significant improvement in the perception of Chinese tones, phrases, and sentences. This novel processor with dynamic modulation of both frequency and amplitude is encouraging for the design of a cochlear implant device for sensorineurally deaf patients who speak tonal languages.
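
    The strategy's two ingredients, a per-frame F0 estimate and a channel envelope, can be sketched as below, with F0 driving pulse rate and the envelope driving pulse amplitude. This is a simplified single-channel illustration under assumed frame sizes, pitch range, and voicing threshold, not the authors' processor.

    ```python
    # Sketch of the strategy's two ingredients for one channel: a per-frame
    # autocorrelation F0 estimate and an RMS envelope, combined so that F0 sets
    # pulse rate and the envelope sets pulse amplitude. Frame length, pitch
    # range, and the voicing threshold are assumptions for illustration.
    import numpy as np

    FS = 16000

    def f0_autocorr(frame, f_min=70.0, f_max=350.0):
        """Crude F0 estimate: lag of the autocorrelation peak in the pitch
        range; returns 0.0 for unvoiced-looking frames."""
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]
        lo, hi = int(FS / f_max), int(FS / f_min)
        lag = lo + np.argmax(ac[lo:hi])
        return FS / lag if ac[lag] > 0.3 * ac[0] else 0.0

    def pulse_channel(x, frame_len=400):
        """Amplitude- and rate-modulated pulse train for one channel of a
        float waveform x (F0 -> pulse rate, envelope -> pulse amplitude)."""
        out = np.zeros_like(x)
        pos = 0.0                                # pulse phase, carried across frames
        for start in range(0, x.size - frame_len, frame_len):
            frame = x[start:start + frame_len]
            f0 = f0_autocorr(frame) or 100.0     # fixed rate when unvoiced
            amp = np.sqrt(np.mean(frame ** 2))   # RMS envelope of the frame
            while pos < start + frame_len:
                if pos >= start:
                    out[int(pos)] = amp          # place one pulse
                pos += FS / f0                   # next pulse one period later
        return out
    ```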

  1. Speech acquisition predicts regions of enhanced cortical response to auditory stimulation in autism spectrum individuals.

    PubMed

    Samson, F; Zeffiro, T A; Doyon, J; Benali, H; Mottron, L

    2015-09-01

    A continuum of phenotypes makes up the autism spectrum (AS). In particular, individuals show large differences in language acquisition, ranging from precocious speech to severe speech onset delay. However, the neurological origin of this heterogeneity remains unknown. Here, we sought to determine whether AS individuals differing in speech acquisition show different cortical responses to auditory stimulation and morphometric brain differences. Whole-brain activity following exposure to non-social sounds was investigated. Individuals in the AS were classified according to the presence or absence of Speech Onset Delay (AS-SOD and AS-NoSOD, respectively) and were compared with IQ-matched typically developing individuals (TYP). AS-NoSOD participants displayed greater task-related activity than TYP in the inferior frontal gyrus and peri-auditory middle and superior temporal gyri, which are associated with language processing. Conversely, the AS-SOD group only showed enhanced activity in the vicinity of the auditory cortex. We detected no differences in brain structure between groups. This is the first study to demonstrate the existence of differences in functional brain activity between AS individuals divided according to their pattern of speech development. These findings support the Trigger-threshold-target model and indicate that the occurrence of speech onset delay in AS individuals depends on the location of cortical functional reallocation, which favors perception in AS-SOD and language in AS-NoSOD.

  2. The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing

    PubMed Central

    Gow, David W.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing. PMID:22498237

  3. Children with Auditory Neuropathy Spectrum Disorder Fitted with Hearing Aids Applying the American Academy of Audiology Pediatric Amplification Guideline: Current Practice and Outcomes.

    PubMed

    Walker, Elizabeth; McCreery, Ryan; Spratford, Meredith; Roush, Patricia

    2016-03-01

    Up to 15% of children with permanent hearing loss (HL) have auditory neuropathy spectrum disorder (ANSD), which involves normal outer hair cell function and disordered afferent neural activity in the auditory nerve or brainstem. Given the varying presentations of ANSD in children, there is a need for more evidence-based research on appropriate clinical interventions for this population. This study compared the speech production, speech perception, and language outcomes of children with ANSD, who are hard of hearing, to children with similar degrees of mild-to-moderately severe sensorineural hearing loss (SNHL), all of whom were fitted with bilateral hearing aids (HAs) based on the American Academy of Audiology pediatric amplification guidelines. Speech perception and communication outcomes data were gathered in a prospective accelerated longitudinal design, with entry into the study between six mo and seven yr of age. Three sites were involved in participant recruitment: Boys Town National Research Hospital, the University of North Carolina at Chapel Hill, and the University of Iowa. The sample consisted of 12 children with ANSD and 22 children with SNHL. The groups were matched based on better-ear pure-tone average, better-ear aided speech intelligibility index, gender, maternal education level, and newborn hearing screening result (i.e., pass or refer). Children and their families participated in an initial baseline visit, followed by visits twice a year for children <2 yr of age and once a year for children >2 yr of age. Paired-sample t-tests were used to compare children with ANSD to children with SNHL. Paired t-tests indicated no significant differences between the ANSD and SNHL groups on language and articulation measures. Children with ANSD displayed functional speech perception skills in quiet. Although the number of participants was too small to conduct statistical analyses for speech perception testing, there appeared to be a trend in which the ANSD group performed more poorly in background noise with HAs, compared to the SNHL group. The American Academy of Audiology Pediatric Amplification Guidelines recommend that children with ANSD receive an HA trial if their behavioral thresholds are sufficiently elevated to impede speech perception at conversational levels. For children with ANSD in the mild-to-severe HL range, the current results support this recommendation, as children with ANSD can achieve functional outcomes similar to peers with SNHL.

  4. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network.

    PubMed

    Wolf, Dhana; Rekittke, Linn-Marlen; Mittelberg, Irene; Klasen, Martin; Mathiak, Klaus

    2017-01-01

    Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language) relying on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area) and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without a model of the stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures and thus we studied ISC changes to task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button press time series and neuronal synchronization in the left IFG over the comparison with other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.

  6. An Investigation of Lexical Progress of Teaching Foreign Languages (TFL) Learners in Terms of Part-of-Speech

    ERIC Educational Resources Information Center

    Çerçi, Arif

    2017-01-01

    In the related literature, it has been discussed that issues related to foreign language lexicon have been ignored; therefore, a solid theory of foreign language lexicon has not been constructed yet. In the framework of Turkish as a foreign language, the literature lacks both cross-sectional and longitudinal studies. To this end, this longitudinal…

  7. Exploring the development of cultural awareness amongst post-graduate speech-language pathology students.

    PubMed

    Howells, Simone; Barton, Georgina; Westerveld, Marleen

    2016-06-01

    Speech-language pathology programs globally need to prepare graduates to work with culturally and linguistically diverse populations. This study explored the knowledge, perceptions and experiences related to the development of cultural awareness of graduate-entry Master of Speech Pathology students at an Australian university. Sixty students across both year-levels completed a cultural awareness survey at the beginning of the semester. To explore how clinical placement influenced students' knowledge and perceptions, year-2 students completed written reflections pre- and post-placement (n = 7) and participated in focus groups post-placement (n = 6). Survey results showed student interest in working with culturally and linguistically diverse populations was high (over 80%) and confidence was moderate (over 50%). More than 80% of students reported awareness of their own cultural identities, stereotypes and prejudices. Content analysis of focus group and written reflection data identified key concepts comprising: (1) context: university and clinical placement site; (2) competencies: professional and individual; and (3) cultural implications: clients' and students' cultural backgrounds. Findings suggest that clinical placement may positively influence the development of cultural awareness and that students' own cultural backgrounds may influence this even more. Further exploration of how students move along a continuum of cultural development is warranted.

  8. The Influence of Hearing Aid Use on Outcomes of Children With Mild Hearing Loss.

    PubMed

    Walker, Elizabeth A; Holte, Lenore; McCreery, Ryan W; Spratford, Meredith; Page, Thomas; Moeller, Mary Pat

    2015-10-01

    This study examined the effects of consistent hearing aid (HA) use on outcomes in children with mild hearing loss (HL). Five- or 7-year-old children with mild HL were separated into 3 groups on the basis of patterns of daily HA use. Using analyses of variance, we compared outcomes between groups on speech and language tests and a speech perception in noise task. Regression models were used to investigate the influence of cumulative auditory experience (audibility, early intervention, HA use) on outcomes. Full-time HA users demonstrated significantly higher scores on vocabulary and grammar measures compared with nonusers. There were no significant differences between the 3 groups on articulation or speech perception measures. After controlling for the variance in age at confirmation of HL, level of audibility, and enrollment in early intervention, only amount of daily HA use was a significant predictor of grammar and vocabulary. The current results provide evidence that children's language development benefits from consistent HA use. Nonusers are at risk in areas such as vocabulary and grammar compared with other children with mild HL who wear HAs regularly. Service providers should work collaboratively to encourage consistent HA use.

  9. Perception of Filtered Speech by Children with Developmental Dyslexia and Children with Specific Language Impairments

    PubMed Central

    Goswami, Usha; Cumming, Ruth; Chait, Maria; Huss, Martina; Mead, Natasha; Wilson, Angela M.; Barnes, Lisa; Fosker, Tim

    2016-01-01

    Here we use two filtered speech tasks to investigate children’s processing of slow (<4 Hz) versus faster (~33 Hz) temporal modulations in speech. We compare groups of children with either developmental dyslexia (Experiment 1) or speech and language impairments (SLIs, Experiment 2) to groups of typically-developing (TD) children age-matched to each disorder group. Ten nursery rhymes were filtered so that their modulation frequencies were either low-pass filtered (<4 Hz) or band-pass filtered (22–40 Hz). Recognition of the filtered nursery rhymes was tested in a multiple-choice picture recognition paradigm. Children with dyslexia aged 10 years showed equivalent recognition overall to TD controls for both the low-pass and band-pass filtered stimuli, but showed significantly impaired acoustic learning during the experiment from low-pass filtered targets. Children with oral SLIs aged 9 years showed significantly poorer recognition of band-pass filtered targets compared to their TD controls, and showed comparable acoustic learning effects to TD children during the experiment. The SLI samples were also divided into children with and without phonological difficulties. The children with both SLI and phonological difficulties were impaired in recognizing both kinds of filtered speech. These data are suggestive of impaired temporal sampling of the speech signal at different modulation rates by children with different kinds of developmental language disorder. Both SLI and dyslexic samples showed impaired discrimination of amplitude rise times. Implications of these findings for a temporal sampling framework for understanding developmental language disorders are discussed. PMID:27303348

  10. Amusia results in abnormal brain activity following inappropriate intonation during speech comprehension.

    PubMed

    Jiang, Cunmei; Hamm, Jeff P; Lim, Vanessa K; Kirk, Ian J; Chen, Xuhai; Yang, Yufang

    2012-01-01

    Pitch processing is a critical ability on which humans' tonal musical experience depends, and which is also of paramount importance for decoding prosody in speech. Congenital amusia refers to deficits in the ability to properly process musical pitch, and recent evidence has suggested that this musical pitch disorder may impact upon the processing of speech sounds. Here we present the first electrophysiological evidence demonstrating that individuals with amusia who speak Mandarin Chinese are impaired in classifying prosody as appropriate or inappropriate during a speech comprehension task. When presented with inappropriate prosody stimuli, control participants elicited a larger P600 and smaller N100 relative to the appropriate condition. In contrast, amusics did not show significant differences between the appropriate and inappropriate conditions in either the N100 or the P600 component. This provides further evidence that the pitch perception deficits associated with amusia may also affect intonation processing during speech comprehension in those who speak a tonal language such as Mandarin, and suggests music and language share some cognitive and neural resources.

  12. A white matter tract mediating awareness of speech.

    PubMed

    Koubeissi, Mohamad Z; Fernandez-Baca Vaca, Guadalupe; Maciunas, Robert; Stephani, Caspar

    2016-01-12

    To investigate the effects of extraoperative electrical stimulation of fiber tracts connecting the language territories, we describe results of extraoperative electrical stimulation through stereotactic electrodes in 3 patients with epilepsy who underwent presurgical evaluation for epilepsy surgery. Contacts of these electrodes sampled, among other structures, the suprainsular white matter of the left hemisphere. Aside from speech disturbance and speech arrest, subcortical electrical stimulation of the white matter tracts directly superior to the insula, which represent the anterior part of the arcuate fascicle, reproducibly induced complex verbal auditory phenomena, including (1) hearing one's own voice in the absence of overt speech, and (2) lack of perception of arrest or alteration in ongoing repetition of words. These results represent direct evidence that the anterior part of the arcuate fascicle is part of a network that is important in the mediation of speech planning and awareness, likely by linking the language areas of the inferior parietal and posterior inferior frontal cortices. More specifically, our observations suggest that this structure may be relevant to the pathophysiology of thought disorders and auditory verbal hallucinations. © 2015 American Academy of Neurology.

  13. Executives' speech expressiveness: analysis of perceptive and acoustic aspects of vocal dynamics.

    PubMed

    Marquezin, Daniela Maria Santos Serrano; Viola, Izabel; Ghirardi, Ana Carolina de Assis Moura; Madureira, Sandra; Ferreira, Léslie Piccolotto

    2015-01-01

    To analyze speech expressiveness in a group of executives based on perceptive and acoustic aspects of vocal dynamics. Four male subjects participated in the study (S1, S2, S3, and S4). The assessments included the Kingdomality test to obtain keywords describing communicative attitudes; a perceptive-auditory assessment of vocal quality and dynamics, performed by three judges who are speech-language pathologists; a perceptive-auditory judgment of the chosen keywords; acoustic analysis of prosodic elements (Praat software); and a statistical analysis. According to the perceptive-auditory analysis of vocal dynamics, S1, S2, S3, and S4 showed no vocal alterations, and all were judged to have a lowered habitual pitch. S1 was perceived as insecure, nonobjective, nonempathetic, and unconvincing, with inappropriate use of pauses consisting mainly of hesitations, and with inadequate separation of prosodic groups that broke up syntagmatic constituents. S2 made regular use of pauses for respiratory reload, sentence organization, and emphasis, and was considered secure, not very objective, empathetic, and convincing. S3 was perceived as secure, objective, empathetic, and convincing, with regular use of pauses for respiratory reload, sentence organization, and hesitations. S4 was the most secure, objective, empathetic, and convincing, with proper use of pauses for respiratory reload, planning, and emphasis; prosodic groups agreed with the statement structure, without breaking up syntagmatic constituents. The speech characteristics and communicative attitudes stood out differently in two subjects, such that a slow speech rate and broken prosodic groups conveyed insecurity, little objectivity, and lack of persuasiveness.

  14. Transfer of Training between Music and Speech: Common Processing, Attention, and Memory.

    PubMed

    Besson, Mireille; Chobert, Julie; Marie, Céline

    2011-01-01

    After a brief historical perspective of the relationship between language and music, we review our work on transfer of training from music to speech that aimed at testing the general hypothesis that musicians should be more sensitive than non-musicians to speech sounds. In light of recent results in the literature, we argue that when long-term experience in one domain influences acoustic processing in the other domain, results can be interpreted as common acoustic processing. But when long-term experience in one domain influences the building-up of abstract and specific percepts in another domain, results are taken as evidence for transfer of training effects. Moreover, we also discuss the influence of attention and working memory on transfer effects and we highlight the usefulness of the event-related potentials method to disentangle the different processes that unfold in the course of music and speech perception. Finally, we give an overview of an on-going longitudinal project with children aimed at testing transfer effects from music to different levels and aspects of speech processing.

  15. Transfer of Training between Music and Speech: Common Processing, Attention, and Memory

    PubMed Central

    Besson, Mireille; Chobert, Julie; Marie, Céline

    2011-01-01

    After a brief historical perspective of the relationship between language and music, we review our work on transfer of training from music to speech that aimed at testing the general hypothesis that musicians should be more sensitive than non-musicians to speech sounds. In light of recent results in the literature, we argue that when long-term experience in one domain influences acoustic processing in the other domain, results can be interpreted as common acoustic processing. But when long-term experience in one domain influences the building-up of abstract and specific percepts in another domain, results are taken as evidence for transfer of training effects. Moreover, we also discuss the influence of attention and working memory on transfer effects and we highlight the usefulness of the event-related potentials method to disentangle the different processes that unfold in the course of music and speech perception. Finally, we give an overview of an on-going longitudinal project with children aimed at testing transfer effects from music to different levels and aspects of speech processing. PMID:21738519

  16. Stop consonant voicing in young children's speech: Evidence from a cross-sectional study

    NASA Astrophysics Data System (ADS)

    Ganser, Emily

    There are intuitive reasons to believe that speech-sound acquisition and language acquisition should be related in development. Surprisingly, only recently has research begun to parse just how the two might be related. This study investigated possible correlations between speech-sound acquisition and language acquisition, as part of a large-scale, longitudinal study of the relationship between different types of phonological development and vocabulary growth in the preschool years. Productions of voiced and voiceless stop-initial words were recorded from 96 children aged 28-39 months. Voice Onset Time (VOT, in ms) was calculated for each token. A mixed-model logistic regression was fitted to predict whether a sound was intended to be voiced or voiceless from its VOT, estimating the slope of the logistic function for each child. This slope was referred to as Robustness of Contrast (based on Holliday, Reidy, Beckman, and Edwards, 2015), defined as the degree of categorical differentiation between the production of two speech sounds or classes of sounds, in this case voiced and voiceless stops. Results showed a wide range of slopes across individual children, suggesting that slope-derived Robustness of Contrast could be a viable means of measuring a child's acquisition of the voicing contrast. Robustness of Contrast was then compared to traditional measures of speech and language skills to investigate whether there was any correlation between the production of stop voicing and broader measures of speech and language development. The Robustness of Contrast measure was found to correlate with all individual measures of speech and language, suggesting that it might indeed be predictive of later language skills.
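
    To make the slope-based measure concrete, here is a simplified stand-in: instead of the study's mixed-effects model, fit an ordinary logistic regression per child, with the slope on VOT serving as that child's Robustness of Contrast. The data and numbers below are invented, and the statsmodels package is assumed.

        # Per-child logistic fit of voicing category on VOT; the slope is a
        # simple stand-in for the Robustness of Contrast measure.
        import numpy as np
        import statsmodels.api as sm

        def robustness_of_contrast(vot_ms, is_voiceless):
            """Fit P(voiceless | VOT) = logistic(b0 + b1*VOT); return b1."""
            X = sm.add_constant(np.asarray(vot_ms, dtype=float))
            fit = sm.Logit(np.asarray(is_voiceless, dtype=int), X).fit(disp=False)
            return fit.params[1]   # steeper slope = sharper voicing contrast

        # Toy data for one child: voiced stops near 20 ms, voiceless near 65 ms
        rng = np.random.default_rng(1)
        vot = np.concatenate([rng.normal(20, 15, 40), rng.normal(65, 20, 40)])
        label = np.concatenate([np.zeros(40), np.ones(40)])
        print(robustness_of_contrast(vot, label))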

  17. Acoustic Sources of Accent in Second Language Japanese Speech.

    PubMed

    Idemaru, Kaori; Wei, Peipei; Gubbins, Lucy

    2018-05-01

    This study reports an exploratory analysis of the acoustic characteristics of second language (L2) speech which give rise to the perception of a foreign accent. Japanese speech samples were collected from American English and Mandarin Chinese speakers (n = 16 in each group) studying Japanese. The L2 participants and native speakers (n = 10) provided speech samples modeled on six short sentences. Segmental features (vowels and stops) and prosodic features (rhythm, tone, and fluency) were examined. Native Japanese listeners (n = 10) rated the samples with regard to degree of foreign accent. Analyses predicting accent ratings from the acoustic measurements indicated that one prosodic feature in particular, tone (defined in this study as high and low patterns of pitch accent and intonation), plays an important role in robustly predicting accent ratings in L2 Japanese across the two first language (L1) backgrounds. These results were consistent with predictions based on phonological and phonetic comparisons between Japanese and English, as well as Japanese and Mandarin Chinese. The results also revealed L1-specific predictors of perceived accent in Japanese. The findings of this study contribute to the growing literature that examines sources of perceived foreign accent.
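
    The analysis described, predicting listener accent ratings from acoustic measurements, is essentially a regression problem. The bare-bones sketch below uses invented data and feature names (statsmodels assumed), with a dominant "tone" coefficient standing in for the kind of result the study reports.

        # Regress accent ratings on per-speaker acoustic features.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 32   # hypothetical number of L2 speakers
        # Columns: vowel error, VOT deviation, tone (pitch-pattern) error
        features = rng.normal(size=(n, 3))
        # Simulated ratings dominated by the tone feature, plus noise
        ratings = (4.0 + 0.2 * features[:, 0] + 0.1 * features[:, 1]
                   + 1.5 * features[:, 2] + rng.normal(0, 0.3, n))

        fit = sm.OLS(ratings, sm.add_constant(features)).fit()
        print(fit.params)   # the tone coefficient should dominate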

  18. Factors Associated with Speech-Sound Stimulability.

    ERIC Educational Resources Information Center

    Lof, Gregory L.

    1996-01-01

    This study examined stimulability in 30 children (ages 3 to 5) with articulation impairments. Factors found to relate to stimulability were articulation visibility, the child's age, the family's socioeconomic status, and the child's overall imitative ability. Perception, severity, otitis media history, language abilities, consistency of…

  19. Foreign Languages Sound Fast: Evidence from Implicit Rate Normalization.

    PubMed

    Bosker, Hans Rutger; Reinisch, Eva

    2017-01-01

    Anecdotal evidence suggests that unfamiliar languages sound faster than one's native language. Empirical evidence for this impression has, so far, come from explicit rate judgments. The aim of the present study was to test whether such perceived rate differences between native and foreign languages (FLs) have effects on implicit speech processing. Our measure of implicit rate perception was "normalization for speech rate": an ambiguous vowel between short /a/ and long /a:/ is interpreted as /a:/ following a fast but as /a/ following a slow carrier sentence. That is, listeners did not judge speech rate itself; instead, they categorized ambiguous vowels whose perception was implicitly affected by the rate of the context. We asked whether a bias towards long /a:/ might be observed when the context is not actually faster but simply spoken in a FL. A fully symmetrical experimental design was used: Dutch and German participants listened to rate-matched (fast and slow) sentences in both languages spoken by the same bilingual speaker. Sentences were followed by non-words that contained vowels from an /a-a:/ duration continuum. Results from Experiments 1 and 2 showed a consistent effect of rate normalization for both listener groups. Moreover, for German listeners, across the two experiments, foreign sentences triggered more /a:/ responses than (rate-matched) native sentences, suggesting that foreign sentences were indeed perceived as faster. Furthermore, this FL effect was modulated by participants' ability to understand the FL: participants who scored higher on a FL translation task showed less of a FL effect. However, opposite effects were found for the Dutch listeners. For them, their native language rather than the FL induced more /a:/ responses. Nevertheless, this reversed effect could be reduced when additional spectral properties of the context were controlled for. Experiment 3, using explicit rate judgments, replicated the effect for German but not Dutch listeners. We therefore conclude that the subjective impression that FLs sound fast may have an effect on implicit speech processing, with implications for how language learners perceive spoken segments in a FL.
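
    One standard way to quantify such a context effect is to fit a logistic psychometric function to the proportion of /a:/ responses at each continuum step, separately per context, and compare the 50% category boundaries; a lower boundary after foreign-language contexts means more /a:/ responses, i.e., the context was implicitly treated as faster. The sketch below uses entirely invented response proportions and assumes scipy.

        # Fit psychometric curves and compare category boundaries per context.
        import numpy as np
        from scipy.optimize import curve_fit

        def psychometric(x, boundary, slope):
            return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

        durations = np.array([100, 120, 140, 160, 180, 200])   # ms steps
        p_long_native  = np.array([0.05, 0.10, 0.30, 0.70, 0.90, 0.97])
        p_long_foreign = np.array([0.08, 0.20, 0.55, 0.85, 0.95, 0.99])

        for name, p in [("native", p_long_native), ("foreign", p_long_foreign)]:
            (boundary, slope), _ = curve_fit(psychometric, durations, p,
                                             p0=(150.0, 0.05))
            print(f"{name}: boundary = {boundary:.1f} ms, slope = {slope:.3f}")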

  20. Assessment of the Speech Intelligibility Performance of Post Lingual Cochlear Implant Users at Different Signal-to-Noise Ratios Using the Turkish Matrix Test.

    PubMed

    Polat, Zahra; Bulut, Erdoğan; Ataş, Ahmet

    2016-09-01

    Spoken word recognition and speech perception tests in quiet are used routinely to assess the benefit that child and adult cochlear implant users receive from their devices. Cochlear implant users generally demonstrate high-level performance on these test materials, as they are able to achieve high-level speech perception in quiet situations. Although these materials provide valuable information regarding Cochlear Implant (CI) users' performance in optimal listening conditions, they do not give realistic information regarding performance in the adverse listening conditions of the everyday environment. The aim of this study was to assess the speech intelligibility performance of postlingual CI users in the presence of noise at different signal-to-noise ratios with the Matrix Test developed for the Turkish language. Cross-sectional study. Thirty postlingual adult implant users, who had been using their implants for a minimum of one year, were evaluated with the Turkish Matrix Test. Subjects' speech intelligibility was measured using the adaptive and non-adaptive Matrix Test in quiet and noisy environments. The results show a correlation between subjects' Pure Tone Average (PTA) values and Matrix Test Speech Reception Threshold (SRT) values in quiet; hence, it is also possible to estimate the PTA values of CI users with the Matrix Test. However, no correlation was found between Matrix SRT values in quiet and Matrix SRT values in noise, and the correlation between PTA values and intelligibility scores in noise was likewise not significant. Therefore, it may not be possible to assess the intelligibility performance of CI users using test batteries administered in quiet conditions. The Matrix Test can be used to assess the benefit CI users receive from their systems in everyday life, since it allows intelligibility testing with material resembling what CI users experience daily and can capture the difficulty they face in discriminating speech in the noisy conditions they must cope with.
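
    Testing at a fixed signal-to-noise ratio comes down to scaling the masker against the speech before mixing. The sketch below (toy signals; not the Matrix-test software) shows the standard power computation.

        # Mix speech and noise at a requested SNR in dB.
        import numpy as np

        def mix_at_snr(speech, noise, snr_db):
            p_speech = np.mean(speech ** 2)
            p_noise = np.mean(noise ** 2)
            gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
            return speech + gain * noise

        rng = np.random.default_rng(3)
        speech = rng.normal(0, 0.1, 16000)   # stand-in for a recorded sentence
        noise = rng.normal(0, 0.1, 16000)    # stand-in for the masking noise
        for snr in (10, 0, -10):             # adaptive runs then home in on the SRT
            mixture = mix_at_snr(speech, noise, snr)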

  1. The Mechanism of Speech Processing in Congenital Amusia: Evidence from Mandarin Speakers

    PubMed Central

    Liu, Fang; Jiang, Cunmei; Thompson, William Forde; Xu, Yi; Yang, Yufang; Stewart, Lauren

    2012-01-01

    Congenital amusia is a neuro-developmental disorder of pitch perception that causes severe problems with music processing but only subtle difficulties in speech processing. This study investigated speech processing in a group of Mandarin speakers with congenital amusia. Thirteen Mandarin amusics and thirteen matched controls participated in a set of tone and intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on word discrimination in natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for pitch change detection and pitch direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, tasks that involved multiple acoustic cues to communicative meaning were not impacted by amusia. Only when the tasks relied mainly on pitch sensitivity did amusics show impaired performance compared to controls. These findings help explain why amusia only affects speech processing in subtle ways. Further studies on a larger sample of Mandarin amusics and on amusics of other language backgrounds are needed to consolidate these results. PMID:22347374
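
    Pitch thresholds like those mentioned here are typically estimated with an adaptive staircase. The sketch below implements a generic 2-down/1-up staircase (which converges on roughly 70.7% correct) against a toy simulated listener; it is not the study's exact procedure, and all numbers are invented.

        # Generic 2-down/1-up adaptive staircase for a pitch-difference threshold.
        import numpy as np

        rng = np.random.default_rng(4)
        true_threshold = 0.5   # semitones; hypothetical listener sensitivity

        def listener_correct(delta):
            """Toy 2AFC listener: accuracy rises from 50% with larger deltas."""
            p = 1.0 / (1.0 + np.exp(-(delta - true_threshold) / 0.1))
            return rng.random() < 0.5 + 0.5 * p

        delta, step = 2.0, 0.25
        correct_run, direction, reversals = 0, -1, []
        while len(reversals) < 8:
            if listener_correct(delta):
                correct_run += 1
                if correct_run == 2:            # two correct: make it harder
                    correct_run = 0
                    if direction == +1:
                        reversals.append(delta) # track direction changes
                    direction = -1
                    delta = max(delta - step, 0.01)
            else:                               # one wrong: make it easier
                correct_run = 0
                if direction == -1:
                    reversals.append(delta)
                direction = +1
                delta += step

        print(f"threshold estimate: {np.mean(reversals[-6:]):.2f} semitones")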

  2. The mechanism of speech processing in congenital amusia: evidence from Mandarin speakers.

    PubMed

    Liu, Fang; Jiang, Cunmei; Thompson, William Forde; Xu, Yi; Yang, Yufang; Stewart, Lauren

    2012-01-01

    Congenital amusia is a neuro-developmental disorder of pitch perception that causes severe problems with music processing but only subtle difficulties in speech processing. This study investigated speech processing in a group of Mandarin speakers with congenital amusia. Thirteen Mandarin amusics and thirteen matched controls participated in a set of tone and intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on word discrimination in natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for pitch change detection and pitch direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, tasks that involved multiple acoustic cues to communicative meaning were not impacted by amusia. Only when the tasks relied mainly on pitch sensitivity did amusics show impaired performance compared to controls. These findings help explain why amusia only affects speech processing in subtle ways. Further studies on a larger sample of Mandarin amusics and on amusics of other language backgrounds are needed to consolidate these results.

  3. When speaker identity is unavoidable: Neural processing of speaker identity cues in natural speech.

    PubMed

    Tuninetti, Alba; Chládková, Kateřina; Peter, Varghese; Schiller, Niels O; Escudero, Paola

    2017-11-01

    Speech sound acoustic properties vary largely across speakers and accents. When perceiving speech, adult listeners normally disregard non-linguistic variation caused by speaker or accent differences, in order to comprehend the linguistic message, e.g. to correctly identify a speech sound or a word. Here we tested whether the process of normalizing speaker and accent differences, facilitating the recognition of linguistic information, is found at the level of neural processing, and whether it is modulated by the listeners' native language. In a multi-deviant oddball paradigm, native and nonnative speakers of Dutch were exposed to naturally-produced Dutch vowels varying in speaker, sex, accent, and phoneme identity. Unexpectedly, the analysis of mismatch negativity (MMN) amplitudes elicited by each type of change shows a large degree of early perceptual sensitivity to non-linguistic cues. This finding on perception of naturally-produced stimuli contrasts with previous studies examining the perception of synthetic stimuli wherein adult listeners automatically disregard acoustic cues to speaker identity. The present finding bears relevance to speech normalization theories, suggesting that at an unattended level of processing, listeners are indeed sensitive to changes in fundamental frequency in natural speech tokens. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Bullying in children who stutter: speech-language pathologists' perceptions and intervention strategies.

    PubMed

    Blood, Gordon W; Boyle, Michael P; Blood, Ingrid M; Nalesnik, Gina R

    2010-06-01

    Bullying in school-age children is a global epidemic. School personnel play a critical role in eliminating this problem. The goals of this study were to examine speech-language pathologists' (SLPs) perceptions of bullying, endorsement of potential strategies for dealing with bullying, and associations among SLPs' responses and specific demographic and practice-oriented variables. A survey was developed and mailed to 1000 school-based SLPs. Six vignettes describing episodes of physical, verbal, and relational bullying of hypothetical 10-year-old students who stutter were developed. Three vignettes described bullying specifically mentioning stuttering behaviors, while three described bullying without mentioning stuttering behavior. The data from 475 SLPs were analyzed. SLPs rated physical bullying as most serious and in need of intervention, followed by verbal bullying. Relational bullying was rated as not serious or in need of intervention. SLPs also responded to the likelihood of using strategies for dealing with bullying. Physical and verbal bullying elicited the use of "talking with the teacher", "working with school personnel", and "reassuring the child of his safety" strategies. Relational bullying elicited "ignore the problem" and "be more assertive" strategies. Correlations among variables are reported. The seriousness of physical and verbal bullying, likelihood of intervention, and the lack of knowledge about relational bullying are discussed. Readers should be able to: (1) summarize the research describing the negative effects of three major types of bullying, (2) summarize the research describing bullying and children with communication disorders, especially stuttering, (3) report results of a survey of speech-language pathologists' (SLPs) perceptions of bullying in school-age children, (4) explain the perceived seriousness of the problem by SLPs and likelihood of intervention, and (5) describe the need for continued prevention and intervention activities for children who stutter. Copyright 2010 Elsevier Inc. All rights reserved.

  5. Parent and teacher perceptions of participation and outcomes in an intensive communication intervention for children with pragmatic language impairment.

    PubMed

    Baxendale, Janet; Lockton, Elaine; Adams, Catherine; Gaile, Jacqueline

    2013-01-01

    Treatment trials that enquire into parents' and teachers' views on speech-language interventions and outcomes for primary school-age children are relatively rare. The current study sought perceptions of the process of intervention and value placed on outcomes resulting from a trial of intervention, the Social Communication Intervention Project (SCIP), for children with communication disorders characterized by persistent needs in pragmatics and social use of language. To describe parent and teacher views around the process and experience of participating in SCIP intervention, including aspects of collaborative practice; and to gain understanding of parents' and teachers' perceptions of communication outcomes for children who had received intervention. Parents and teachers of eight children in the intervention arm of the SCIP study participated in semi-structured interviews with a researcher within 2 months of completion of SCIP intervention. The framework method of analysis was used to explore predetermined themes based around a list of topics informed by previous thinking and experience of the research. Parents and teachers perceived liaison with the SCIP speech and language therapist as being an important element of the intervention. Indirect approaches to liaison with parents were perceived as effective in transferring information as were brief meetings with teachers. Teachers and parents were able to make explicit links between therapy actions and perceived changes in the child. Work on comprehension monitoring and emotional vocabulary was perceived to be particularly effective with respect to communication outcomes. Parents also reflected that they had adopted different strategies towards communication and behaviour in the home as a result of intervention. The limits of potential change in terms of child communication in areas such as non-verbal communication and pragmatic skills were discussed by parents. This analysis has contributed essential information to the evaluation of SCIP by describing the experience of the intervention as delivered, exploring processes of effective implementation and change in the school setting and by describing the value placed on different outcomes by parents and teachers. These findings can inform planning for collaborations between speech and language therapists and teachers and provide useful information about mechanisms of change in different components of the SCIP intervention which have not been individually evaluated before. Information on changes in children's communication skills which were perceived as meaningful to those living and working with the children daily is crucial to the acceptance and translation of the SCIP intervention into practice. © 2012 Royal College of Speech & Language Therapists.

  6. Cross-Modal Interactions during Perception of Audiovisual Speech and Nonspeech Signals: An fMRI Study

    ERIC Educational Resources Information Center

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2011-01-01

    During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream--prior to its fusion with auditory phonological features [Hertrich,…

  7. The Development of L2 Fluency during Study Abroad: A Cross-Language Study

    ERIC Educational Resources Information Center

    Di Silvio, Francesca; Diao, Wenhao; Donovan, Anne

    2016-01-01

    Examining speech samples from 75 American university students learning 1 of 3 languages (Mandarin, Russian, and Spanish), this article reports on a study of second language (L2) learners' oral fluency development and its relationship with their gains in holistic proficiency ratings during a semester abroad. In study abroad research, there is a…

  8. Assessing the Presence of Lexical Competition across Languages: Evidence from the Stroop Task

    ERIC Educational Resources Information Center

    Costa, Albert; Albareda, Barbara; Santesteban, Mikel

    2008-01-01

    Do the lexical representations of the non-response language enter into lexical competition during speech production? This issue has been studied by means of the picture-word interference paradigm in which two paradoxical effects have been observed. The so-called CROSS-LANGUAGE IDENTITY EFFECT (Costa, Miozzo and Caramazza, 1999) has been taken as…

  9. Issues in the development of cross-cultural assessments of speech and language for children.

    PubMed

    Carter, Julie A; Lees, Janet A; Murira, Gladys M; Gona, Joseph; Neville, Brian G R; Newton, Charles R J C

    2005-01-01

    There is an increasing demand for the assessment of speech and language in clinical and research situations in countries where there are few assessment resources. Due to the nature of cultural variation and the potential for cultural bias, new assessment tools need to be developed or existing tools require adaptation. However, there are few guidelines on how to develop 'culturally appropriate' assessment tools. The aim was to review the literature on cross-cultural assessment in order to identify the major issues in the development and adaptation of speech and language assessments for children, and to illustrate these issues with practical examples from our own research programme in Kenya. Five broad categories pertaining to cross-cultural assessment development were identified: the influence of culture on performance, familiarity with the testing situation, the effect of formal education, language issues and picture recognition. We outline how some of these issues were addressed in our own research. The results of the review were integrated to produce a list of ten guidelines highlighting the importance of collaboration with mother-tongue speakers; piloting; familiar assessment materials; assessment location; and practice items and prompts. There are few clinicians and assessors, whether in the UK or abroad, who do not assess or treat children from a culture different from their own. Awareness of cultural variation and bias and cooperative efforts to develop and administer culturally appropriate assessment tools are the foundation of effective, valid treatment programmes.

  10. [Modeling developmental aspects of sensorimotor control of speech production].

    PubMed

    Kröger, B J; Birkholz, P; Neuschaefer-Rube, C

    2007-05-01

    Detailed knowledge of the neurophysiology of speech acquisition is important for understanding the developmental aspects of speech perception and production and for understanding developmental disorders of speech perception and production. A computer-implemented neural model of sensorimotor control of speech production was developed. The model is capable of demonstrating in detail the neural functions of different cortical areas during speech production. (i) Two sensory and two motor maps or neural representations, together with the appertaining neural mappings or projections, establish the sensorimotor feedback control system. These maps and mappings are already formed and trained during the prelinguistic phase of speech acquisition. (ii) The feedforward sensorimotor control system comprises the lexical map (representations of sounds, syllables, and words of the first language) and the mappings from the lexical to the sensory and motor maps. The training of the appertaining mappings forms the linguistic phase of speech acquisition. (iii) Three prelinguistic learning phases (silent mouthing, quasi-stationary vocalic articulation, and realization of articulatory protogestures) can be defined on the basis of our simulation studies with the computational neural model. These learning phases can be associated with temporal phases of prelinguistic speech acquisition obtained from natural data. The neural model illuminates the detailed function of specific cortical areas during speech production. In particular, it can be shown that developmental disorders of speech production may result from a delayed or incorrect process within one of the prelinguistic learning phases defined by the neural model.
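
    As a loose illustration of the map-and-mapping idea (vastly simpler than the model described), the sketch below learns an inverse sensorimotor mapping from random "babbling": random motor vectors are paired with their auditory consequences, and least squares recovers motor commands for a desired auditory target. All dimensions and names are invented.

        # Toy sensorimotor mapping learned from random babbling.
        import numpy as np

        rng = np.random.default_rng(5)
        true_forward = rng.normal(size=(3, 4))        # hidden motor->auditory physics

        motor_babble = rng.uniform(-1, 1, (500, 4))   # prelinguistic exploration
        auditory = motor_babble @ true_forward.T      # observed consequences

        # Learn the inverse mapping (auditory target -> motor command)
        inverse, *_ = np.linalg.lstsq(auditory, motor_babble, rcond=None)

        target = np.array([0.3, -0.2, 0.5])           # desired auditory pattern
        command = target @ inverse
        print(np.allclose(command @ true_forward.T, target, atol=1e-6))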

  11. Facilitating Inclusion in Early Childhood Settings: Interdisciplinary Preservice Preparation.

    ERIC Educational Resources Information Center

    Harrison, Melody F.; Able-Boone, Harriet; West, Tracey A.

    2001-01-01

    An interdisciplinary practicum case study is presented to illustrate components of a specialized preservice preparation for graduate students (n=44) in audiology, early childhood special education, school psychology, and speech-language pathology, designed to assist them in becoming inclusion collaborators/facilitators. Students' perceptions of…

  12. Perception of resyllabification in French.

    PubMed

    Gaskell, M Gareth; Spinelli, Elsa; Meunier, Fanny

    2002-07-01

    In three experiments, we examined the effects of phonological resyllabification processes on the perception of French speech. Enchainment involves the resyllabification of a word-final consonant across a syllable boundary (e.g., in chaque avion, the /k/ crosses the syllable boundary to become syllable initial). Liaison involves a further process of realization of a latent consonant, alongside resyllabification (e.g., the /t/ in petit avion). If the syllable is a dominant unit of perception in French (Mehler, Dommergues, Frauenfelder, & Segui, 1981), these processes should cause problems for recognition of the following word. A cross-modal priming experiment showed no cost attached to either type of resyllabification in terms of reduced activation of the following word. Furthermore, word- and sequence-monitoring experiments again showed no cost and suggested that the recognition of vowel-initial words may be facilitated when they are preceded by a word that had undergone resyllabification through enchainment or liaison. We examine the sources of information that could underpin facilitation and propose a refinement of the syllable's role in the perception of French speech.

  13. A music perception disorder (congenital amusia) influences speech comprehension.

    PubMed

    Liu, Fang; Jiang, Cunmei; Wang, Bei; Xu, Yi; Patel, Aniruddh D

    2015-01-01

    This study investigated the underlying link between speech and music by examining whether and to what extent congenital amusia, a musical disorder characterized by degraded pitch processing, would impact spoken sentence comprehension for speakers of Mandarin, a tone language. Sixteen Mandarin-speaking amusics and 16 matched controls were tested on the intelligibility of news-like Mandarin sentences with natural and flat fundamental frequency (F0) contours (created via speech resynthesis) under four signal-to-noise (SNR) conditions (no noise, +5, 0, and -5 dB SNR). While speech intelligibility in quiet and extremely noisy conditions (SNR = -5 dB) was not significantly compromised by flattened F0, both amusic and control groups achieved better performance with natural-F0 sentences than flat-F0 sentences under moderately noisy conditions (SNR = +5 and 0 dB). Relative to normal listeners, amusics demonstrated reduced speech intelligibility in both quiet and noise, regardless of whether the F0 contours of the sentences were natural or flattened. This deficit in speech intelligibility was not associated with impaired pitch perception in amusia. These findings provide evidence for impaired speech comprehension in congenital amusia, suggesting that the deficit of amusics extends beyond pitch processing and includes segmental processing. Copyright © 2014 Elsevier Ltd. All rights reserved.
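
    Flattening an F0 contour, as in the resynthesis described, can be approximated with Praat-style pitch manipulation. The tentative sketch below assumes the praat-parselmouth package (the command strings follow Praat's menu labels; the study's own resynthesis procedure may have differed), and "sentence.wav" is a placeholder path.

        # Replace a sentence's natural F0 contour with its mean (flat F0).
        import parselmouth
        from parselmouth.praat import call

        snd = parselmouth.Sound("sentence.wav")       # placeholder file
        manipulation = call(snd, "To Manipulation", 0.01, 75, 600)
        pitch_tier = call(manipulation, "Extract pitch tier")
        mean_f0 = call(pitch_tier, "Get mean (points)", snd.xmin, snd.xmax)

        # Remove the contour and insert a single point at the sentence mean
        call(pitch_tier, "Remove points between", snd.xmin, snd.xmax)
        call(pitch_tier, "Add point", 0.5 * (snd.xmin + snd.xmax), mean_f0)
        call([pitch_tier, manipulation], "Replace pitch tier")
        flat = call(manipulation, "Get resynthesis (overlap-add)")  # Sound object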

  14. Pitch expertise is not created equal: Cross-domain effects of musicianship and tone language experience on neural and behavioural discrimination of speech and music.

    PubMed

    Hutka, Stefanie; Bidelman, Gavin M; Moreno, Sylvain

    2015-05-01

    Psychophysiological evidence supports a music-language association, such that experience in one domain can impact processing required in the other domain. We investigated the bidirectionality of this association by measuring event-related potentials (ERPs) in native English-speaking musicians, native tone language (Cantonese) nonmusicians, and native English-speaking nonmusician controls. We tested the degree to which pitch expertise stemming from musicianship or tone language experience similarly enhances the neural encoding of auditory information necessary for speech and music processing. Early cortical discriminatory processing for music and speech sounds was characterized using the mismatch negativity (MMN). Stimuli included 'large deviant' and 'small deviant' pairs of sounds that differed minimally in pitch (fundamental frequency, F0; contrastive musical tones) or timbre (first formant, F1; contrastive speech vowels). Behavioural F0 and F1 difference limen tasks probed listeners' perceptual acuity for these same acoustic features. Musicians and Cantonese speakers performed comparably in pitch discrimination; only musicians showed an additional advantage on timbre discrimination performance and enhanced MMN responses to both music and speech. Cantonese language experience was not associated with enhancements on neural measures, despite enhanced behavioural pitch acuity. These data suggest that while both musicianship and tone language experience enhance some aspects of auditory acuity (behavioural pitch discrimination), musicianship confers further-reaching enhancements to auditory function, tuning both pitch- and timbre-related brain processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
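
    The MMN reported in such studies is a difference wave: the averaged response to deviants minus the averaged response to standards, with amplitude and latency read off in a typical 100-250 ms window. The sketch below fabricates epochs with numpy purely to show the computation; it is not the study's analysis.

        # MMN as a deviant-minus-standard difference wave (fabricated data).
        import numpy as np

        fs = 500                          # sampling rate (Hz)
        t = np.arange(-0.1, 0.5, 1 / fs)  # epoch time axis (s), onset at 0
        rng = np.random.default_rng(6)

        def toy_epochs(n, mmn_amp):
            base = rng.normal(0, 1.0, (n, t.size))
            # deviants carry an extra negativity peaking near 180 ms
            bump = -np.exp(-((t - 0.18) ** 2) / (2 * 0.03 ** 2))
            return base + mmn_amp * bump

        standards = toy_epochs(400, 0.0)
        deviants = toy_epochs(100, 2.0)
        difference = deviants.mean(axis=0) - standards.mean(axis=0)

        window = (t >= 0.10) & (t <= 0.25)
        amp = difference[window].min()
        lat = t[window][np.argmin(difference[window])]
        print(f"MMN: {amp:.2f} (a.u.) at {lat * 1000:.0f} ms")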

  15. Pitch Perception in the First Year of Life, a Comparison of Lexical Tones and Musical Pitch.

    PubMed

    Chen, Ao; Stevens, Catherine J; Kager, René

    2017-01-01

    Pitch variation is pervasive in speech, regardless of the language to which infants are exposed. Lexical tone perception is influenced by general sensitivity to pitch. We examined whether lexical tone perception may develop in parallel with the perception of pitch in another cognitive domain, namely music. Using a visual fixation paradigm, 101 4- and 12-month-old Dutch infants were tested on their discrimination of Chinese rising and dipping lexical tones as well as comparable three-note musical pitch contours. The 4-month-old infants failed to show a discrimination effect in either condition, whereas the 12-month-old infants succeeded in both conditions. These results suggest that lexical tone perception may reflect and relate to general pitch perception abilities, which may serve as a basis for developing more complex language and musical skills.

  16. The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language

    PubMed Central

    Poeppel, David

    2012-01-01

    Research on the brain basis of speech and language faces theoretical and empirical challenges. The majority of current research, dominated by imaging, deficit-lesion, and electrophysiological techniques, seeks to identify regions that underpin aspects of language processing such as phonology, syntax, or semantics. The emphasis lies on localization and spatial characterization of function. The first part of the paper deals with a practical challenge that arises in the context of such a research program. This maps problem concerns the extent to which spatial information and localization can satisfy the explanatory needs for perception and cognition. Several areas of investigation exemplify how the neural basis of speech and language is discussed in those terms (regions, streams, hemispheres, networks). The second part of the paper turns to a more troublesome challenge, namely how to formulate the formal links between neurobiology and cognition. This principled problem thus addresses the relation between the primitives of cognition (here speech, language) and neurobiology. Dealing with this mapping problem invites the development of linking hypotheses between the domains. The cognitive sciences provide granular, theoretically motivated claims about the structure of various domains (the ‘cognome’); neurobiology, similarly, provides a list of the available neural structures. However, explanatory connections will require crafting computationally explicit linking hypotheses at the right level of abstraction. For both the practical maps problem and the principled mapping problem, developmental approaches and evidence can play a central role in the resolution. PMID:23017085

  17. Phonetic diversity, statistical learning, and acquisition of phonology.

    PubMed

    Pierrehumbert, Janet B

    2003-01-01

    In learning to perceive and produce speech, children master complex language-specific patterns. Daunting language-specific variation is found both in the segmental domain and in the domain of prosody and intonation. This article reviews the challenges posed by results in phonetic typology and sociolinguistics for the theory of language acquisition. It argues that categories are initiated bottom-up from statistical modes in use of the phonetic space, and sketches how exemplar theory can be used to model the updating of categories once they are initiated. It also argues that bottom-up initiation of categories is successful thanks to the perception-production loop operating in the speech community. The behavior of this loop means that the superficial statistical properties of speech available to the infant indirectly reflect the contrastiveness and discriminability of categories in the adult grammar. The article also argues that the developing system is refined using internal feedback from type statistics over the lexicon, once the lexicon is well-developed. The application of type statistics to a system initiated with surface statistics does not cause a fundamental reorganization of the system. Instead, it exploits confluences across levels of representation which characterize human language and make bootstrapping possible.
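
    The claim that categories are initiated bottom-up from statistical modes lends itself to a small demonstration: pool unlabeled tokens along one phonetic dimension, smooth their distribution, and seed one category per density mode. The sketch below is entirely schematic (invented VOT-like data; scipy assumed).

        # Seed phonetic categories at the modes of an unlabeled distribution.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(7)
        tokens = np.concatenate([rng.normal(15, 8, 300),    # latent category A
                                 rng.normal(70, 12, 300)])  # latent category B

        kde = gaussian_kde(tokens)
        grid = np.linspace(tokens.min(), tokens.max(), 512)
        density = kde(grid)

        # Local maxima of the smoothed density = candidate category centers
        peak = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
        print("categories initiated at:", np.round(grid[1:-1][peak], 1))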

  18. Volunteer involvement in the support of self-managed computerised aphasia treatment: The volunteer perspective.

    PubMed

    Palmer, Rebecca; Enderby, Pam

    2016-10-01

    The speech-language pathology profession has explored a number of approaches to support efficient delivery of interventions for people with stroke-induced aphasia. This study aimed to explore the role of volunteers in supporting self-managed practice of computerised language exercises. A qualitative interview study of the volunteer support role was carried out alongside a pilot randomised controlled trial of computer aphasia therapy. Patients with aphasia practised computer exercises tailored for them by a speech-language pathologist at home regularly for 5 months. Eight of the volunteers who supported the intervention took part in semi-structured interviews. Interviews were audio recorded, transcribed verbatim and analysed thematically. Emergent themes included: training and support requirements; perception of the volunteer role; challenges facing the volunteer, in general and specifically related to supporting computer therapy exercises. The authors concluded that volunteers helped to motivate patients to practise their computer therapy exercises and also provided support to the carers. Training and ongoing structured support of therapy activity and conduct is required from a trained speech-language pathologist to ensure the successful involvement of volunteers supporting impairment-based computer exercises in patients' own homes.

  19. Similar frequency of the McGurk effect in large samples of native Mandarin Chinese and American English speakers.

    PubMed

    Magnotti, John F; Basu Mallick, Debshila; Feng, Guo; Zhou, Bin; Zhou, Wen; Beauchamp, Michael S

    2015-09-01

    Humans combine visual information from mouth movements with auditory information from the voice to recognize speech. A common method for assessing multisensory speech perception is the McGurk effect: When presented with particular pairings of incongruent auditory and visual speech syllables (e.g., the auditory speech sounds for "ba" dubbed onto the visual mouth movements for "ga"), individuals perceive a third syllable, distinct from the auditory and visual components. Chinese and American cultures differ in the prevalence of direct facial gaze and in the auditory structure of their languages, raising the possibility of cultural- and language-related group differences in the McGurk effect. There is no consensus in the literature about the existence of these group differences, with some studies reporting less McGurk effect in native Mandarin Chinese speakers than in English speakers and others reporting no difference. However, these studies sampled small numbers of participants tested with a small number of stimuli. Therefore, we collected data on the McGurk effect from large samples of Mandarin-speaking individuals from China and English-speaking individuals from the USA (total n = 307) viewing nine different stimuli. Averaged across participants and stimuli, we found similar frequencies of the McGurk effect between Chinese and American participants (48% vs. 44%). In both groups, we observed a large range of frequencies both across participants (0% to 100%) and stimuli (15% to 83%), with the main effect of culture and language accounting for only 0.3% of the variance in the data. High individual variability in perception of the McGurk effect necessitates the use of large sample sizes to accurately estimate group differences.

  20. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.

    PubMed

    Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung

    2016-01-01

    Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.
