Increased Activation in Superior Temporal Gyri as a Function of Increment in Phonetic Features
ERIC Educational Resources Information Center
Osnes, Berge; Hugdahl, Kenneth; Hjelmervik, Helene; Specht, Karsten
2011-01-01
A common assumption is that phonetic sounds initiate unique processing in the superior temporal gyri and sulci (STG/STS). The anatomical areas subserving these processes are also implicated in the processing of non-phonetic stimuli such as musical instrument sounds. The differential processing of phonetic and non-phonetic sounds was investigated in…
Word-level information influences phonetic learning in adults and infants
Feldman, Naomi H.; Myers, Emily B.; White, Katherine S.; Griffiths, Thomas L.; Morgan, James L.
2013-01-01
Infants begin to segment words from fluent speech during the same time period that they learn phonetic categories. Segmented words can provide a potentially useful cue for phonetic learning, yet accounts of phonetic category acquisition typically ignore the contexts in which sounds appear. We present two experiments to show that, contrary to the assumption that phonetic learning occurs in isolation, learners are sensitive to the words in which sounds appear and can use this information to constrain their interpretation of phonetic variability. Experiment 1 shows that adults use word-level information in a phonetic category learning task, assigning acoustically similar vowels to different categories more often when those sounds consistently appear in different words. Experiment 2 demonstrates that eight-month-old infants similarly pay attention to word-level information and that this information affects how they treat phonetic contrasts. These findings suggest that phonetic category learning is a rich, interactive process that takes advantage of many different types of cues that are present in the input. PMID:23562941
Effects and modeling of phonetic and acoustic confusions in accented speech.
Fung, Pascale; Liu, Yi
2005-11-01
Accented speech recognition is more challenging than standard speech recognition due to the effects of phonetic and acoustic confusions. Phonetic confusion in accented speech occurs when an expected phone is pronounced as a different one, which leads to erroneous recognition. Acoustic confusion occurs when the pronounced phone lies acoustically between two baseform models and can be recognized equally well as either one. We propose that it is necessary to analyze and model these confusions separately in order to improve accented speech recognition without degrading standard speech recognition. Since low phonetic confusion units in accented speech do not give rise to automatic speech recognition errors, we focus on analyzing and reducing phonetic and acoustic confusability under high phonetic confusion conditions. We propose using a likelihood ratio test to measure phonetic confusion and an asymmetric acoustic distance to measure acoustic confusion. Only accent-specific phonetic units with low acoustic confusion are used in an augmented pronunciation dictionary, while phonetic units with high acoustic confusion are reconstructed using decision tree merging. Experimental results show that our approach is effective and superior to methods modeling phonetic confusion or acoustic confusion alone in accented speech, yielding a significant 5.7% absolute WER reduction without degrading standard speech recognition.
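To make the two measures described above concrete, here is a minimal Python sketch. The Gaussian phone models, feature values, and the use of KL divergence as the asymmetric acoustic distance are assumptions for illustration only, not the authors' implementation.

```python
"""Toy illustration of separating phonetic and acoustic confusion.

Hypothetical sketch: phone models are 1-D Gaussians over a single acoustic
feature (e.g., F2); a real system would use HMM/GMM states over MFCCs.
The KL divergence stands in for an asymmetric acoustic distance; the
paper's exact formulation may differ.
"""
import math

def gaussian_loglik(x, mean, var):
    """Log-likelihood of observation x under a 1-D Gaussian phone model."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def phonetic_confusion_llr(x, baseform, surface):
    """Likelihood ratio statistic: positive values mean the accented
    realization x is better explained by the surface (confused) phone."""
    return gaussian_loglik(x, *surface) - gaussian_loglik(x, *baseform)

def asymmetric_acoustic_distance(p, q):
    """KL(p || q) between two Gaussian phone models; KL(p||q) != KL(q||p)."""
    mp, vp = p
    mq, vq = q
    return 0.5 * (math.log(vq / vp) + (vp + (mp - mq) ** 2) / vq - 1.0)

# Baseform vs. surface phone models (means and variances are made up).
baseform, surface = (1700.0, 200.0 ** 2), (1900.0, 150.0 ** 2)
token_f2 = 1860.0  # an accented realization drifting toward the surface phone
print("phonetic confusion LLR:", round(phonetic_confusion_llr(token_f2, baseform, surface), 3))
print("acoustic distance (base->surf):", round(asymmetric_acoustic_distance(baseform, surface), 3))
print("acoustic distance (surf->base):", round(asymmetric_acoustic_distance(surface, baseform), 3))
```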
Negotiating towards a next turn: phonetic resources for 'doing the same'.
Sikveland, Rein Ove
2012-03-01
This paper investigates hearers' use of response tokens (back-channels), in maintaining and differentiating their actions. Initial observations suggest that hearers produce a sequence of phonetically similar responses to disengage from the current topic, and dissimilar responses to engage with the current topic. This is studied systematically by combining detailed interactional and phonetic analysis in a collection of naturally-occurring talk in Norwegian. The interactional analysis forms the basis for labeling actions as maintained ('doing the same') and differentiated ('NOT doing the same'), which is then used as a basis for phonetic analysis. The phonetic analysis shows that certain phonetic characteristics, including pitch, loudness, voice quality and articulatory characteristics, are associated with 'doing the same', as different from 'NOT doing the same'. Interactional analysis gives further evidence of how this differentiation is of systematic relevance in the negotiations of a next turn. This paper addresses phonetic variation and variability by focusing on the relationship between sequence and phonetics in the turn-by-turn development of meaning. This has important implications for linguistic/phonetic research, and for the study of back-channels.
Maxent Harmonic Grammars and Phonetic Duration
ERIC Educational Resources Information Center
Lefkowitz, Lee Michael
2017-01-01
Research in phonetics has established the grammatical status of gradient phonetic patterns in language, suggesting that there is a component of the grammar that governs systematic relationships between discrete phonological representations and gradiently continuous acoustic or articulatory phonetic representations. This dissertation joins several…
Phonetic imitation by young children and its developmental changes.
Nielsen, Kuniko
2014-12-01
In the current study, the author investigated the developmental course of phonetic imitation in childhood and evaluated existing accounts of phonetic imitation. Sixteen preschoolers, 15 third graders, and 18 college students participated. An experiment using a modified imitation paradigm with a picture-naming task was conducted, in which participants' voice-onset time (VOT) was compared before and after they were exposed to target speech with artificially increased VOT. Extended VOT in the target speech was imitated by preschoolers and third graders as well as adults, confirming previous findings on phonetic imitation. Furthermore, an age effect on phonetic imitation was observed: children showed greater imitation than adults, whereas the degree of imitation was comparable between preschoolers and third graders. No significant effect of gender or word specificity was observed. Young children imitated fine phonetic details of the target speech, and a greater degree of phonetic imitation was observed in children than in adults. These findings suggest that the degree of phonetic imitation negatively correlates with phonological development.
Cataño, Lorena; Barlow, Jessica A.; Moyna, María Irene
2015-01-01
This study evaluates 39 different phonetic inventories of 16 Spanish-speaking children (ages 0;11 to 5;1) in terms of hierarchical complexity. Phonetic featural differences are considered in order to evaluate the proposed implicational hierarchy of Dinnsen et al.’s phonetic inventory typology for English. The children’s phonetic inventories are examined independently and in relation to one another. Five hierarchical complexity levels are proposed, similar to those of English and other languages, although with some language-specific differences. These findings have implications for theoretical assumptions about the universality of phonetic inventory development, and for remediation of Spanish-speaking children with phonological impairments. PMID:19504400
Statistical Inference in the Learning of Novel Phonetic Categories
ERIC Educational Resources Information Center
Zhao, Yuan
2010-01-01
Learning a phonetic category (or any linguistic category) requires integrating different sources of information. A crucial unsolved problem for phonetic learning is how this integration occurs: how can we update our previous knowledge about a phonetic category as we hear new exemplars of the category? One model of learning is Bayesian Inference,…
Infants Show a Facilitation Effect for Native Language Phonetic Perception between 6 and 12 Months
ERIC Educational Resources Information Center
Kuhl, Patricia K.; Stevens, Erica; Hayashi, Akiko; Deguchi, Toshisada; Kiritani, Shigeru; Iverson, Paul
2006-01-01
Patterns of developmental change in phonetic perception are critical to theory development. Many previous studies document a decline in nonnative phonetic perception between 6 and 12 months of age. However, much less experimental attention has been paid to developmental change in native-language phonetic perception over the same time period. We…
Phonetic Symbols through Audiolingual Method to Improve the Students' Listening Skill
ERIC Educational Resources Information Center
Samawiyah, Zuhrotun; Saifuddin, Muhammad
2016-01-01
Phonetic symbols represent linguistic features of how words are pronounced or spelled, and they offer a way to easily identify and recognize words. Phonetic symbols were applied in this research to give the students clear input and comprehension of English words. Moreover, these phonetic symbols were applied within the audio-lingual method…
Yeh, Su-Ling; Chou, Wei-Lun; Ho, Pokuan
2017-11-17
Most Chinese characters are compounds consisting of a semantic radical indicating the semantic category and a phonetic radical cuing the pronunciation of the character. Controversy surrounds whether radicals also undergo the same lexical processing as characters and, critically, whether phonetic radicals involve semantic activation, since they can also be characters when standing alone. Here we examined these issues using the Stroop task, in which participants responded to the ink color of the character. The key finding was that Stroop effects were found when the character itself had a meaning unrelated to color but contained a color-name phonetic radical (e.g., "guess", with the phonetic radical "cyan" on the right) or had a meaning associated with color (e.g., "pity", with the phonetic radical "blood" on the right, which has a meaning related to "red"). Such Stroop effects from the phonetic radical within a character unrelated to color support the view that Chinese character recognition involves decomposition of characters into their constituent radicals, with the meaning of each radical, including phonetic radicals, activated independently, even though this inevitably interferes with the meaning of the whole character. Compared with morphological decomposition in English, where the semantics of the morphemes are not necessarily activated, the unavoidable semantic activation of phonetic radicals represents a unique feature of Chinese character processing.
ERIC Educational Resources Information Center
International Phonetic Association.
This guide contains concise information on the International Phonetic Alphabet, a universally agreed system of notation for the sounds of languages that has been in use for over a century, and guidance on how to use it. The handbook replaces the previous edition, "Principles of the International Phonetic Association," which has not been revised since 1949.…
Developing a Weighted Measure of Speech Sound Accuracy
Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.
2010-01-01
Purpose: The purpose is to develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method: Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results: Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of severity of a child's speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddlers' speech over time. Conclusion: Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech. PMID:20699344
Sign Lowering and Phonetic Reduction in American Sign Language.
Tyrone, Martha E; Mauk, Claude E
2010-04-01
This study examines sign lowering as a form of phonetic reduction in American Sign Language. Phonetic reduction occurs in the course of normal language production, when instead of producing a carefully articulated form of a word, the language user produces a less clearly articulated form. When signs are produced in context by native signers, they often differ from the citation forms of signs. In some cases, phonetic reduction is manifested as a sign being produced at a lower location than in the citation form. Sign lowering has been documented previously, but this is the first study to examine it in phonetic detail. The data presented here are tokens of the sign WONDER, as produced by six native signers, in two phonetic contexts and at three signing rates, which were captured by optoelectronic motion capture. The results indicate that sign lowering occurred for all signers, according to the factors we manipulated. Sign production was affected by several phonetic factors that also influence speech production, namely, production rate, phonetic context, and position within an utterance. In addition, we have discovered interesting variations in sign production, which could underlie distinctions in signing style, analogous to accent or voice quality in speech.
Developing a weighted measure of speech sound accuracy.
Preston, Jonathan L; Ramsdell, Heather L; Oller, D Kimbrough; Edwards, Mary Louise; Tobin, Stephen J
2011-02-01
To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound Accuracy (WSSA) score. The authors then evaluate the reliability and validity of this measure. Phonetic transcriptions were analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy was validated against existing measures, was used to discriminate typical and disordered speech production, and was evaluated to examine sensitivity to changes in phonetic accuracy over time. Reliability between transcribers and consistency of scores among different word sets and testing points are compared. Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders and captures growth in phonetic accuracy in toddlers' speech over time. The measure correlates highly across transcribers, word lists, and testing points. Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech.
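The following is a minimal sketch of the kind of transcription-based weighted scoring the WSSA abstracts above describe. The error categories and penalty weights below are invented placeholders for illustration; the published measure defines its own levels of phonetic accuracy and weighting scheme.

```python
"""Hypothetical weighted speech-sound accuracy scorer.

The error categories and weights are illustrative only; the actual WSSA
(Preston et al.) defines its own accuracy levels and weights.
"""

# Placeholder penalty weights: closer-to-target errors cost less.
WEIGHTS = {
    "correct": 0.0,
    "distortion": 0.25,      # target produced with a minor phonetic distortion
    "substitution": 0.75,    # a different phone substituted for the target
    "omission": 1.0,         # target sound deleted entirely
}

def weighted_accuracy(scored_segments):
    """Return a 0-100 score from (segment, error_type) pairs."""
    if not scored_segments:
        return 0.0
    total_penalty = sum(WEIGHTS[err] for _, err in scored_segments)
    return 100.0 * (1.0 - total_penalty / len(scored_segments))

sample = [("k", "correct"), ("ae", "correct"), ("t", "distortion"),
          ("s", "substitution"), ("r", "omission")]
print(f"weighted accuracy: {weighted_accuracy(sample):.1f}")  # 60.0 for this sample
```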
Yu, Alan C L; Abrego-Collier, Carissa; Sonderegger, Morgan
2013-01-01
Numerous studies have documented the phenomenon of phonetic imitation: the process by which the production patterns of an individual become more similar on some phonetic or acoustic dimension to those of her interlocutor. Though social factors have been suggested as a motivator for imitation, few studies have established a tight connection between language-external factors and a speaker's likelihood to imitate. The present study investigated the phenomenon of phonetic imitation using a within-subject design embedded in an individual-differences framework. Participants were administered a phonetic imitation task, which included two speech production tasks separated by a perceptual learning task, and a battery of measures assessing traits associated with Autism-Spectrum Condition, working memory, and personality. To examine the effects of subjective attitude on phonetic imitation, participants were randomly assigned to four experimental conditions, where the perceived sexual orientation of the narrator (homosexual vs. heterosexual) and the outcome (positive vs. negative) of the story depicted in the exposure materials differed. The extent of phonetic imitation by an individual is significantly modulated by the story outcome, as well as by the participant's subjective attitude toward the model talker, the participant's personality trait of openness, and the autistic-like trait associated with attention switching.
A role for the developing lexicon in phonetic category acquisition
Feldman, Naomi H.; Griffiths, Thomas L.; Goldwater, Sharon; Morgan, James L.
2013-01-01
Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning. PMID:24219848
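As a rough illustration of how word-level feedback can disambiguate overlapping categories, here is a small hedged sketch: a word-conditioned prior shifts the posterior over two overlapping vowel categories for an acoustically ambiguous token. The category parameters and word prior are made up; the paper's actual model is a richer Bayesian lexical-distributional model.

```python
"""Toy posterior for an ambiguous vowel token, with and without a word-level cue.

Assumed values: two overlapping 1-D vowel categories (F1 means) and a made-up
prior P(category | word frame); the published model is more elaborate.
"""
import math

def norm_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

categories = {"i": (350.0, 60.0), "I": (450.0, 60.0)}   # F1 mean, sd (made up)
word_prior = {"b_t-frame": {"i": 0.9, "I": 0.1}}        # hypothetical lexical knowledge

def posterior(f1, prior):
    scores = {c: prior[c] * norm_pdf(f1, *p) for c, p in categories.items()}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

ambiguous_token = 400.0  # acoustically midway between the two categories
flat = posterior(ambiguous_token, {"i": 0.5, "I": 0.5})
lexical = posterior(ambiguous_token, word_prior["b_t-frame"])
print("acoustics only:", {c: round(p, 2) for c, p in flat.items()})
print("with word cue: ", {c: round(p, 2) for c, p in lexical.items()})
```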
Interaction and Representational Integration: Evidence from Speech Errors
ERIC Educational Resources Information Center
Goldrick, Matthew; Baker, H. Ross; Murphy, Amanda; Baese-Berk, Melissa
2011-01-01
We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated…
Automated Analysis of Child Phonetic Production Using Naturalistic Recordings
ERIC Educational Resources Information Center
Xu, Dongxin; Richards, Jeffrey A.; Gilkerson, Jill
2014-01-01
Purpose: Conventional resource-intensive methods for child phonetic development studies are often impractical for sampling and analyzing child vocalizations in sufficient quantity. The purpose of this study was to provide new information on early language development by an automated analysis of child phonetic production using naturalistic…
Phonetics and Other Disciplines: Then and Now.
ERIC Educational Resources Information Center
Bronstein, Arthur J.; Raphael, Lawrence J.
Phonetic science is becoming increasingly interdisciplinary. Phoneticians rely on, or at least collaborate with, sociologists, psychologists, biologists, poets, physicists, anthropologists, neurologists and others. A look at the history of phonetics reveals that this seemingly recent trend has deep roots. In fact, it is possible to draw parallels…
An Experimental Investigation of Phonetic Naturalness
ERIC Educational Resources Information Center
Greenwood, Anna
2016-01-01
This dissertation begins with the observation of a typological asymmetry within phonological patterns related to phonetic naturalness. Patterns that are rooted within existing tendencies of perception and/or production--in other words, patterns that are phonetically "natural"--are common in phonological typology and seen in a variety of…
Influence of Role-Switching on Phonetic Convergence in Conversation
ERIC Educational Resources Information Center
Pardo, Jennifer S.; Jay, Isabel Cajori; Hoshino, Risa; Hasbun, Sara Maria; Sowemimo-Coker, Chantal; Krauss, Robert M.
2013-01-01
The current study examined phonetic convergence when talkers alternated roles during conversational interaction. The talkers completed a map navigation task in which they alternated instruction Giver and Receiver roles across multiple map pairs. Previous studies found robust effects of the role of a talker on phonetic convergence, and it was…
Does the Recording Medium Influence Phonetic Transcription of Cleft Palate Speech?
ERIC Educational Resources Information Center
Klintö, Kristina; Lohmander, Anette
2017-01-01
Background: In recent years, analyses of cleft palate speech based on phonetic transcriptions have become common. However, the results vary considerably among different studies. It cannot be excluded that differences in assessment methodology, including the recording medium, influence the results. Aims: To compare phonetic transcriptions from…
ERIC Educational Resources Information Center
Zee, Eric
A phonetic study of vowel devoicing in the Shanghai dialect of Chinese explored the phonetic conditions under which the high, closed vowels and the apical vowel in Shanghai are most likely to become devoiced. The phonetic conditions may be segmental or suprasegmental. Segmentally, the study sought to determine whether a certain type of pre-vocalic…
Ethnicity and Phonetic Variation in a San Francisco Neighborhood
ERIC Educational Resources Information Center
Hall-Lew, Lauren
2009-01-01
This dissertation advances research in sociolinguistics by analyzing phonetic variation in a majority Asian American community in San Francisco, California. As one of the first community studies focusing on Asian Americans in an urban US context, this work speaks to ongoing discussions about speaker ethnicity, phonetic variation, and regional…
Measuring Language-Specific Phonetic Settings
ERIC Educational Resources Information Center
Mennen, Ineke; Scobbie, James M.; de Leeuw, Esther; Schaeffler, Sonja; Schaeffler, Felix
2010-01-01
While it is well known that languages have different phonemes and phonologies, there is growing interest in the idea that languages may also differ in their "phonetic setting". The term "phonetic setting" refers to a tendency for the vocal apparatus to adopt a language-specific habitual configuration. For example, languages may differ in their…
Yu, Alan C. L.; Abrego-Collier, Carissa; Sonderegger, Morgan
2013-01-01
Numerous studies have documented the phenomenon of phonetic imitation: the process by which the production patterns of an individual become more similar on some phonetic or acoustic dimension to those of her interlocutor. Though social factors have been suggested as a motivator for imitation, few studies have established a tight connection between language-external factors and a speaker's likelihood to imitate. The present study investigated the phenomenon of phonetic imitation using a within-subject design embedded in an individual-differences framework. Participants were administered a phonetic imitation task, which included two speech production tasks separated by a perceptual learning task, and a battery of measures assessing traits associated with Autism-Spectrum Condition, working memory, and personality. To examine the effects of subjective attitude on phonetic imitation, participants were randomly assigned to four experimental conditions, where the perceived sexual orientation of the narrator (homosexual vs. heterosexual) and the outcome (positive vs. negative) of the story depicted in the exposure materials differed. The extent of phonetic imitation by an individual is significantly modulated by the story outcome, as well as by the participant's subjective attitude toward the model talker, the participant's personality trait of openness, and the autistic-like trait associated with attention switching. PMID:24098665
The interaction of short-term and long-term memory in phonetic category formation
NASA Astrophysics Data System (ADS)
Harnsberger, James D.
2002-05-01
This study examined the role that short-term memory capacity plays in the relationship between novel stimuli (e.g., non-native speech sounds, native nonsense words) and phonetic categories in long-term memory. Thirty native speakers of American English were administered five tests: categorial AXB discrimination using nasal consonants from Malayalam; categorial identification, also using Malayalam nasals, which measured the influence of phonetic categories in long-term memory; digit span; nonword span, a short-term memory measure mediated by phonetic categories in long-term memory; and paired-associate word learning (word-word and word-nonword pairs). The results showed that almost all measures were significantly correlated with one another. The strongest predictors of the discrimination and word-nonword learning results were nonword span (r=+0.62) and digit span (r=+0.51), respectively. When the identification test results were partialed out, only nonword span significantly correlated with discrimination. The results show a strong influence of short-term memory capacity on the encoding of phonetic detail within phonetic categories and suggest that long-term memory representations regulate the capacity of short-term memory to preserve information for subsequent encoding. The results of this study will also be discussed with regard to resolving the tension between episodic and abstract models of phonetic category structure.
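The partialing-out step reported above can be illustrated with the standard first-order partial correlation formula; the zero-order correlations in the sketch below are placeholders, not the study's data.

```python
"""First-order partial correlation: r(x, y) controlling for z.

Illustrative only; the zero-order correlations are made up, not the study's data.
"""
import math

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation between x and y with z partialed out."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# e.g., x = AXB discrimination, y = nonword span, z = categorial identification
r_xy, r_xz, r_yz = 0.62, 0.45, 0.40   # hypothetical zero-order correlations
print(f"partial r = {partial_corr(r_xy, r_xz, r_yz):.3f}")
```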
Phonetic compliance: a proof-of-concept study
Delvaux, Véronique; Huet, Kathy; Piccaluga, Myriam; Harmegnies, Bernard
2014-01-01
In this paper, we introduce the concept of "phonetic compliance," which is defined as the intrinsic individual ability to produce speech sounds that are unusual in the native language and constitutes a part of the ability to acquire L2 phonetics and phonology. We argue that phonetic compliance represents a systematic source of variance that needs to be accounted for if one wants to improve control over the independent variables manipulated in SLA experimental studies. We then present the results of a two-fold proof-of-concept study aimed at testing the feasibility of assessing phonetic compliance as a gradient. In study 1, a pilot data-collection paradigm is implemented with an occasional sample of 10 native French speakers engaged in two reproduction tasks involving, respectively, vowels and aspirated stops, and the data are analyzed using descriptive statistics. In study 2, complementary data including L1-typical realizations are collected, resulting in the development of a first set of indicators that may be useful to appropriately assess, and further refine the concept of, phonetic compliance. Based on a critical analysis of the contributions and limitations of the proof-of-concept study, the general discussion formulates guidelines for the following stages of development of a reliable and valid test of phonetic compliance. PMID:25538645
Li, Luan; Wang, Hua-Chen; Castles, Anne; Hsieh, Miao-Ling; Marinus, Eva
2018-07-01
According to the self-teaching hypothesis (Share, 1995), phonological decoding is fundamental to acquiring orthographic representations of novel written words. However, phonological decoding is not straightforward in non-alphabetic scripts such as Chinese, where words are presented as characters. Here, we present the first study investigating the role of phonological decoding in orthographic learning in Chinese. We examined two possible types of phonological decoding: the use of phonetic radicals, an internal phonological aid, and the use of Zhuyin, an external phonological coding system. Seventy-three Grade 2 children were taught the pronunciations and meanings of twelve novel compound characters over four days. They were then exposed to the written characters in short stories and were assessed on their reading accuracy and on their subsequent orthographic learning via orthographic choice and spelling tasks. The novel characters were assigned three different types of pronunciation in relation to their phonetic radicals: (1) a pronunciation identical to the phonetic radical in isolation; (2) a common alternative pronunciation associated with the phonetic radical when it appears in other characters; and (3) a pronunciation unrelated to the phonetic radical. The presence of Zhuyin was also manipulated. The children read the novel characters more accurately when phonological cues from the phonetic radicals were available and in the presence of Zhuyin. However, only the phonetic radicals facilitated orthographic learning. The findings provide the first empirical evidence of orthographic learning via self-teaching in Chinese and reveal how phonological decoding functions to support learning in non-alphabetic writing systems. Copyright © 2018 Elsevier B.V. All rights reserved.
Phonetic Training in the Foreign Language Curriculum
ERIC Educational Resources Information Center
Burnham, Kevin R.
2014-01-01
In this experiment we evaluate phonetic training as a tool for language learning. Specifically, we take a group of native speakers (NS) of English (n=24) currently enrolled in Arabic classes at American universities, and evaluate the effectiveness of a high variability phonetic training program (HVPT) to improve their perception of a difficult…
Readiness and Phonetic Analysis of Words in Grades K-2
ERIC Educational Resources Information Center
Campbell, Bonnie; Quinn, Goldie
The method used at the Bellevue, Nebraska, public schools to teach reading readiness and the phonetic analysis of words in kindergarten through grade two is described. Suggestions for teaching the readiness skills of auditory and visual perception, vocabulary skills of word recognition and word meaning, and the phonetic analysis of words in grades…
Repair Sequences in Dysarthric Conversational Speech: A Study in Interactional Phonetics
ERIC Educational Resources Information Center
Rutter, Ben
2009-01-01
This paper presents some findings from a case study of repair sequences in conversations between a dysarthric speaker, Chris, and her interactional partners. It adopts the methodology of interactional phonetics, where turn design, sequence organization, and variation in phonetic parameters are analysed in unison. The analysis focused on the use of…
Predicting Phonetic Transcription Agreement: Insights from Research in Infant Vocalizations
ERIC Educational Resources Information Center
Ramsdell, Heather L.; Oller, D. Kimbrough; Ethington, Corinna A.
2007-01-01
The purpose of this study is to provide new perspectives on correlates of phonetic transcription agreement. Our research focuses on phonetic transcription and coding of infant vocalizations. The findings are presumed to be broadly applicable to other difficult cases of transcription, such as found in severe disorders of speech, which similarly…
Segments, Letters and Gestures: Thoughts on Doing and Teaching Phonetics and Transcription
ERIC Educational Resources Information Center
Muller, Nicole; Papakyritsis, Ioannis
2011-01-01
This brief article reflects on some pitfalls inherent in the learning and teaching of segmental phonetic transcription. We suggest that a gestural interpretation of disordered speech data, in conjunction with segmental phonetic transcription, can add valuable insight into patterns of disordered speech, and that a gestural orientation should form…
ERIC Educational Resources Information Center
Matthews, Claire
1991-01-01
A patient with chronic agrammatic Broca's aphasia exhibited deep dyslexia and was treated with functional reorganization of the phonetic route of reading, with the patient learning consciously to control formerly automatic behaviors. The patient's responses indicated that the phonetic route encompasses at least two dissociable functions:…
Inhibitory phonetic priming: Where does the effect come from?
Dufour, Sophie; Frauenfelder, Ulrich Hans
2016-01-01
Both phonological and phonetic priming studies reveal inhibitory effects that have been interpreted as resulting from lexical competition between the prime and the target. We present a series of phonetic priming experiments that contrasted this lexical locus explanation with that of a prelexical locus by manipulating the lexical status of the prime and the target and the task used. In the related condition of all experiments, spoken targets were preceded by spoken primes that were phonetically similar but shared no phonemes with the target (/bak/-/dεt/). In Experiments 1 and 2, word and nonword primes produced an inhibitory effect of equal size in shadowing and same-different tasks respectively. Experiments 3 and 4 showed robust inhibitory phonetic priming on both word and nonword targets in the shadowing task, but no effect at all in a lexical decision task. Together, these findings show that the inhibitory phonetic priming effect occurs independently of the lexical status of both the prime and the target, and only in tasks that do not necessarily require the activation of lexical representations. Our study thus argues in favour of a prelexical locus for this effect.
Barth-Weingarten, Dagmar
2012-03-01
In grammar books, the various functions of "and" as phrasal coordinator and clausal conjunction are treated as standard knowledge. In addition, studies on the uses of "and" in everyday talk-in-interaction have described its discourse-organizational functions on a more global level. In the phonetic literature, in turn, a range of phonetic forms of "and" have been listed. Yet, so far few studies have related the phonetic features of "and" to its function. This contribution surveys a range of phonetic forms of "and" in a corpus of private American English telephone conversations. It shows that the use of forms such as [ænd], [εn], or [en], among others, is not random but, in essence, correlates with the syntactic-pragmatic scope of "and" and the cognitive closeness of the items that "and" connects. This, in turn, allows the phonetic design of "and" to contribute to the organization of turn-taking. The findings presented are based on conversation-analytic and interactional-linguistic methodology, which includes quantitative analyses.
Cues for Lexical Tone Perception in Children: Acoustic Correlates and Phonetic Context Effects
ERIC Educational Resources Information Center
Tong, Xiuli; McBride, Catherine; Burnham, Denis
2014-01-01
Purpose: The authors investigated the effects of acoustic cues (i.e., pitch height, pitch contour, and pitch onset and offset) and phonetic context cues (i.e., syllable onsets and rimes) on lexical tone perception in Cantonese-speaking children. Method: Eight minimum pairs of tonal contrasts were presented in either an identical phonetic context…
ERIC Educational Resources Information Center
Davidson, Lisa; Wilson, Colin
2016-01-01
Recent research has shown that speakers are sensitive to non-contrastive phonetic detail present in nonnative speech (e.g. Escudero et al. 2012; Wilson et al. 2014). Difficulties in interpreting and implementing unfamiliar phonetic variation can lead nonnative speakers to modify second language forms by vowel epenthesis and other changes. These…
ERIC Educational Resources Information Center
Pouplier, Marianne; Marin, Stefania; Waltl, Susanne
2014-01-01
Purpose: Phonetic accommodation in speech errors has traditionally been used to identify the processing level at which an error has occurred. Recent studies have challenged the view that noncanonical productions may solely be due to phonetic, not phonological, processing irregularities, as previously assumed. The authors of the present study…
Modelling the Architecture of Phonetic Plans: Evidence from Apraxia of Speech
ERIC Educational Resources Information Center
Ziegler, Wolfram
2009-01-01
In theories of spoken language production, the gestural code prescribing the movements of the speech organs is usually viewed as a linear string of holistic, encapsulated, hard-wired, phonetic plans, e.g., of the size of phonemes or syllables. Interactions between phonetic units on the surface of overt speech are commonly attributed to either the…
Effects of Phonetic Similarity in the Identification of Mandarin Tones
ERIC Educational Resources Information Center
Li, Bin; Shao, Jing; Bao, Mingzhen
2017-01-01
Tonal languages differ in how they use phonetic correlates, e.g. average pitch height and pitch direction, for tonal contrasts. Thus, native speakers of a tonal language may need to adjust their attention to familiar or unfamiliar phonetic cues when perceiving non-native tones. On the other hand, speakers of a non-tonal language may need to…
ERIC Educational Resources Information Center
Werfel, Krystal L.
2017-01-01
The purpose of this study was to evaluate the effects of phonetic transcription training on the explicit phonemic awareness of adults. Fifty undergraduate students enrolled in a phonetic transcription course and 107 control undergraduate students completed a paper-and-pencil measure of explicit phonemic awareness on the first and last days of…
DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1
NASA Astrophysics Data System (ADS)
Garofolo, J. S.; Lamel, L. F.; Fisher, W. M.; Fiscus, J. G.; Pallett, D. S.
1993-02-01
The Texas Instruments/Massachusetts Institute of Technology (TIMIT) corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems. TIMIT contains speech from 630 speakers representing 8 major dialect divisions of American English, each speaking 10 phonetically-rich sentences. The TIMIT corpus includes time-aligned orthographic, phonetic, and word transcriptions, as well as speech waveform data for each spoken sentence. The release of TIMIT contains several improvements over the Prototype CD-ROM released in December, 1988: (1) full 630-speaker corpus, (2) checked and corrected transcriptions, (3) word-alignment transcriptions, (4) NIST SPHERE-headered waveform files and header manipulation software, (5) phonemic dictionary, (6) new test and training subsets balanced for dialectal and phonetic coverage, and (7) more extensive documentation.
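The time-aligned phonetic transcriptions mentioned above are plain-text files; a minimal reader is sketched below, assuming the standard TIMIT .PHN layout (one "start_sample end_sample phone-label" line per segment, with sample indices at the corpus's 16 kHz rate). The file path is hypothetical.

```python
"""Minimal reader for a TIMIT phonetic transcription (.PHN) file.

Assumes the standard layout: one "start_sample end_sample label" line per
segment, with sample indices at TIMIT's 16 kHz sampling rate.
"""

SAMPLE_RATE = 16000  # TIMIT waveforms are sampled at 16 kHz

def read_phn(path):
    """Return a list of (start_sec, end_sec, phone) tuples."""
    segments = []
    with open(path, encoding="ascii") as fh:
        for line in fh:
            start, end, label = line.split()
            segments.append((int(start) / SAMPLE_RATE, int(end) / SAMPLE_RATE, label))
    return segments

if __name__ == "__main__":
    # Hypothetical path into a local copy of the corpus.
    for start, end, phone in read_phn("TIMIT/TRAIN/DR1/FCJF0/SA1.PHN"):
        print(f"{start:7.3f} {end:7.3f}  {phone}")
```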
ERIC Educational Resources Information Center
Feizollahi, Zhaleh
2010-01-01
The phonetics-phonology interface has long been debated; some linguists argue for a modular approach (Keating 1984, Pierrehumbert 1990, Zsiga 1997, Cohn 1998), while others argue that there is no interface, and that phonetics and phonology are one and the same (Browman & Goldstein 1989-1992, Ohala 1990). Recent proposals by Gafos (2002), and…
ERIC Educational Resources Information Center
Ebrahimi, Pouria
2010-01-01
As subfields of linguistics, phonetics and phonology have as their central concern the articulation of sounds; that is, how human beings produce speech. Although the field dates back over 2,000 years, modern contributions of scientists and scholars to phonetics and phonology have involved various fields of science and schools of thought such…
ERIC Educational Resources Information Center
Marty, Fernand
Three computer-based systems for phonetic/graphemic transcription of language are described, compared, and contrasted. The text is entirely in French, with examples given from the French language. The three approaches to transcription are: (1) text entered in standard typography and exiting in phonetic transcription with markers for rhythmic…
Conboy, Barbara T; Brooks, Rechele; Meltzoff, Andrew N; Kuhl, Patricia K
2015-01-01
Infants learn phonetic information from a second language with live-person presentations, but not television or audio-only recordings. To understand the role of social interaction in learning a second language, we examined infants' joint attention with live, Spanish-speaking tutors and used a neural measure of phonetic learning. Infants' eye-gaze behaviors during Spanish sessions at 9.5-10.5 months of age predicted second-language phonetic learning, assessed by an event-related potential measure of Spanish phoneme discrimination at 11 months. These data suggest a powerful role for social interaction at the earliest stages of learning a new language.
A novel probabilistic framework for event-based speech recognition
NASA Astrophysics Data System (ADS)
Juneja, Amit; Espy-Wilson, Carol
2003-10-01
One of the reasons for unsatisfactory performance of the state-of-the-art automatic speech recognition (ASR) systems is the inferior acoustic modeling of low-level acoustic-phonetic information in the speech signal. An acoustic-phonetic approach to ASR, on the other hand, explicitly targets linguistic information in the speech signal, but such a system for continuous speech recognition (CSR) is not known to exist. A probabilistic and statistical framework for CSR based on the idea of the representation of speech sounds by bundles of binary valued articulatory phonetic features is proposed. Multiple probabilistic sequences of linguistically motivated landmarks are obtained using binary classifiers of manner phonetic features-syllabic, sonorant and continuant-and the knowledge-based acoustic parameters (APs) that are acoustic correlates of those features. The landmarks are then used for the extraction of knowledge-based APs for source and place phonetic features and their binary classification. Probabilistic landmark sequences are constrained using manner class language models for isolated or connected word recognition. The proposed method could overcome the disadvantages encountered by the early acoustic-phonetic knowledge-based systems that led the ASR community to switch to systems highly dependent on statistical pattern analysis methods and probabilistic language or grammar models.
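As a rough illustration of the landmark idea described above, the sketch below thresholds a frame-level posterior for one binary manner feature ([sonorant]) and treats its crossings as candidate landmarks. The posteriors, feature choice, and thresholding are simplifications for illustration, not the framework's actual implementation.

```python
"""Toy landmark detection from per-frame binary manner-feature posteriors.

A simplification of the landmark-based framework: a frame-level [sonorant]
posterior is thresholded and its crossings are treated as candidate
landmarks. The posterior values below are made up.
"""

def landmarks(sonorant_posteriors, threshold=0.5):
    """Return (frame_index, kind) for each crossing of the threshold."""
    events = []
    prev = sonorant_posteriors[0] >= threshold
    for i, p in enumerate(sonorant_posteriors[1:], start=1):
        cur = p >= threshold
        if cur != prev:
            events.append((i, "sonorant-onset" if cur else "sonorant-offset"))
        prev = cur
    return events

# Hypothetical classifier outputs for a stop-vowel-fricative stretch.
posteriors = [0.05, 0.1, 0.2, 0.8, 0.9, 0.95, 0.9, 0.4, 0.2, 0.1]
print(landmarks(posteriors))  # [(3, 'sonorant-onset'), (7, 'sonorant-offset')]
```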
Phonetics exercises using the Alvin experiment-control software.
Hillenbrand, James M; Gayvert, Robert T; Clark, Michael J
2015-04-01
Exercises are described that were designed to provide practice in phonetic transcription for students taking an introductory phonetics course. The goal was to allow instructors to offload much of the drill that would otherwise need to be covered in class or handled with paper-and-pencil tasks using text rather than speech as input. The exercises were developed using Alvin, a general-purpose software package for experiment design and control. The simplest exercises help students learn sound-symbol associations. For example, a vowel-transcription exercise presents listeners with consonant-vowel-consonant syllables on each trial; students are asked to choose among buttons labeled with phonetic symbols for 12 vowels. Several word-transcription exercises are included in which students hear a word and are asked to enter a phonetic transcription. Immediate feedback is provided for all of the exercises. An explanation of the methods that are used to create exercises is provided. Although no formal evaluation was conducted, comments on course evaluations suggest that most students found the exercises to be useful. Exercises were developed for use in an introductory phonetics course. The exercises can be used in their current form, they can be modified to suit individual needs, or new exercises can be developed.
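A generic console sketch of the kind of vowel-transcription drill described above (multiple-choice identification with immediate feedback) is given below. This is not Alvin code and plays no audio; the stimulus filenames and answer key are invented stand-ins.

```python
"""Generic vowel-transcription drill with immediate feedback.

Not Alvin code: a console stand-in for the multiple-choice exercise the
paper describes, with made-up stimulus/answer pairs and no audio playback.
"""
import random

VOWEL_CHOICES = ["i", "ɪ", "e", "ɛ", "æ", "ɑ", "ɔ", "o", "ʊ", "u", "ʌ", "ɝ"]
# In the real exercise each trial plays a CVC syllable; here the "audio"
# is just a hypothetical filename paired with its correct IPA answer.
STIMULI = [("heed.wav", "i"), ("hid.wav", "ɪ"), ("head.wav", "ɛ"), ("had.wav", "æ")]

def run_drill(trials=4):
    score = 0
    n = min(trials, len(STIMULI))
    for stim, answer in random.sample(STIMULI, k=n):
        print(f"\n[playing {stim}]  choices: {' '.join(VOWEL_CHOICES)}")
        response = input("your transcription: ").strip()
        if response == answer:
            score += 1
            print("correct")
        else:
            print(f"incorrect; the answer was /{answer}/")
    print(f"\nscore: {score}/{n}")

if __name__ == "__main__":
    run_drill()
```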
ERIC Educational Resources Information Center
Barth-Weingarten, Dagmar
2012-01-01
In grammar books, the various functions of "and" as phrasal coordinator and clausal conjunction are treated as standard knowledge. In addition, studies on the uses of "and" in everyday talk-in-interaction have described its discourse-organizational functions on a more global level. In the phonetic literature, in turn, a range of phonetic forms of…
49 CFR Appendix A to Part 220 - Recommended Phonetic Alphabet
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Recommended Phonetic Alphabet A Appendix A to Part...—Recommended Phonetic Alphabet A—ALFA B—BRAVO C—CHARLIE D—DELTA E—ECHO F—FOXTROT G—GOLF H—HOTEL I—INDIA J...—VICTOR W—WHISKEY X—XRAY Y—YANKEE Z—ZULU The letter “ZULU” should be written as “Z” to distinguish it from...
49 CFR Appendix A to Part 220 - Recommended Phonetic Alphabet
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 4 2014-10-01 2014-10-01 false Recommended Phonetic Alphabet A Appendix A to Part...—Recommended Phonetic Alphabet A—ALFA B—BRAVO C—CHARLIE D—DELTA E—ECHO F—FOXTROT G—GOLF H—HOTEL I—INDIA J...—VICTOR W—WHISKEY X—XRAY Y—YANKEE Z—ZULU The letter “ZULU” should be written as “Z” to distinguish it from...
49 CFR Appendix A to Part 220 - Recommended Phonetic Alphabet
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 4 2012-10-01 2012-10-01 false Recommended Phonetic Alphabet A Appendix A to Part...—Recommended Phonetic Alphabet A—ALFA B—BRAVO C—CHARLIE D—DELTA E—ECHO F—FOXTROT G—GOLF H—HOTEL I—INDIA J...—VICTOR W—WHISKEY X—XRAY Y—YANKEE Z—ZULU The letter “ZULU” should be written as “Z” to distinguish it from...
49 CFR Appendix A to Part 220 - Recommended Phonetic Alphabet
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 4 2013-10-01 2013-10-01 false Recommended Phonetic Alphabet A Appendix A to Part...—Recommended Phonetic Alphabet A—ALFA B—BRAVO C—CHARLIE D—DELTA E—ECHO F—FOXTROT G—GOLF H—HOTEL I—INDIA J...—VICTOR W—WHISKEY X—XRAY Y—YANKEE Z—ZULU The letter “ZULU” should be written as “Z” to distinguish it from...
49 CFR Appendix A to Part 220 - Recommended Phonetic Alphabet
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Recommended Phonetic Alphabet A Appendix A to Part...—Recommended Phonetic Alphabet A—ALFA B—BRAVO C—CHARLIE D—DELTA E—ECHO F—FOXTROT G—GOLF H—HOTEL I—INDIA J...—VICTOR W—WHISKEY X—XRAY Y—YANKEE Z—ZULU The letter “ZULU” should be written as “Z” to distinguish it from...
Speech Synthesis Using Perceptually Motivated Features
2012-01-23
…with others a few years prior (with the concurrence of the project's program manager, Willard Larkin). The Perceptual Flow of Phonetic Information and… "The Perceptual Flow of Phonetic Processing," consonant confusion matrices are analyzed for patterns of phonetic-feature decoding errors conditioned… decoding) is also observed. From these conditional probability patterns, it is proposed that they reflect a temporal flow of perceptual processing
ERIC Educational Resources Information Center
Kelly, Spencer D.; Lee, Angela L.
2012-01-01
It is now widely accepted that hand gestures help people understand and learn language. Here, we provide an exception to this general rule--when phonetic demands are high, gesture actually hurts. Native English-speaking adults were instructed on the meaning of novel Japanese word pairs that were phonetically hard for non-native speakers (/ite/ vs.…
Conboy, Barbara T.; Brooks, Rechele; Meltzoff, Andrew N.; Kuhl, Patricia K.
2015-01-01
Infants learn phonetic information from a second language with live-person presentations, but not television or audio-only recordings. To understand the role of social interaction in learning a second language, we examined infants’ joint attention with live, Spanish-speaking tutors and used a neural measure of phonetic learning. Infants’ eye-gaze behaviors during Spanish sessions at 9.5 – 10.5 months of age predicted second-language phonetic learning, assessed by an event-related potential (ERP) measure of Spanish phoneme discrimination at 11 months. These data suggest a powerful role for social interaction at the earliest stages of learning a new language. PMID:26179488
Graphemes Sharing Phonetic Features Tend to Induce Similar Synesthetic Colors.
Kang, Mi-Jeong; Kim, Yeseul; Shin, Ji-Young; Kim, Chai-Youn
2017-01-01
Individuals with grapheme-color synesthesia experience idiosyncratic colors when viewing achromatic letters or digits. Despite large individual differences in grapheme-color associations, synesthetes tend to associate graphemes sharing a perceptual feature with similar synesthetic colors. Sound has been suggested as one such feature. In the present study, we investigated whether graphemes whose representative phonemes have similar phonetic features tend to be associated with analogous synesthetic colors. We tested five Korean multilingual synesthetes on a color-matching task using graphemes from Korean, English, and Japanese orthography. We then compared the similarity of synesthetic colors induced by characters sharing a phonetic feature. Results showed that graphemes associated with the same phonetic feature tended to induce similar synesthetic colors in both within- and cross-script analyses. Moreover, this tendency was consistent for graphemes that are not transliterable into each other as well as for graphemes that are. These results suggest that it is the perceptual (i.e., phonetic) properties associated with graphemes, not just conceptual associations such as transliteration, that determine synesthetic color.
ERIC Educational Resources Information Center
Marini, Anthony E.
1990-01-01
The verbal encoding ability of 24 students (ages 14-20) with learning disabilities (LD) was compared to that of 24 non-learning-disabled subjects. LD subjects did not show a release from proactive interference, suggesting that such students are less likely to encode the phonetic features of words or use a phonetic code in short-term memory.…
Perception and the temporal properties of speech
NASA Astrophysics Data System (ADS)
Gordon, Peter C.
1991-11-01
Four experiments addressing the role of attention in phonetic perception are reported. The first experiment shows that the relative importance of two cues to the voicing distinction changes when subjects must perform an arithmetic distractor task at the same time as identifying a speech stimulus. The voice onset time cue loses phonetic significance when subjects are distracted, while the F0 onset frequency cue does not. The second experiment shows a similar pattern for two cues to the distinction between the vowels /i/ (as in 'beat') and /I/ (as in 'bit'). Together these experiments indicate that careful attention to speech perception is necessary for strong acoustic cues to achieve their full phonetic impact, while weaker acoustic cues achieve their full phonetic impact without close attention. Experiment 3 shows that this pattern is obtained when the distractor task places little demand on verbal short term memory. Experiment 4 provides a large data set for testing formal models of the role of attention in speech perception. Attention is shown to influence the signal to noise ratio in phonetic encoding. This principle is instantiated in a network model in which the role of attention is to reduce noise in the phonetic encoding of acoustic cues. Implications of this work for understanding speech perception and general theories of the role of attention in perception are discussed.
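The "attention reduces noise in phonetic encoding" principle described above can be sketched as a small Monte Carlo: two acoustic cues to a voicing-like contrast are corrupted by encoding noise whose magnitude depends on an attention parameter, and identification accuracy is read off a simple linear decision rule. All parameter values and the decision rule are invented for illustration, not the network model reported in the study.

```python
"""Monte Carlo sketch: attention as noise reduction in phonetic encoding.

Two cues to a voicing-like contrast are encoded with additive noise; the
attention parameter scales that noise. Cue values, noise levels, and the
linear decision rule are invented for illustration only.
"""
import random

def identify(category, attention, trials=20000, rng=random.Random(0)):
    """Proportion correct for one category under a given attention level."""
    strong = {+1: 1.0, -1: -1.0}   # strong cue (e.g., VOT-like) separates well
    weak = {+1: 0.3, -1: -0.3}     # weak cue (e.g., F0-onset-like) separates less
    base_noise_strong, base_noise_weak = 1.2, 0.3
    correct = 0
    for _ in range(trials):
        noise_scale = 1.0 / attention  # more attention -> less encoding noise
        s = strong[category] + rng.gauss(0, base_noise_strong * noise_scale)
        w = weak[category] + rng.gauss(0, base_noise_weak * noise_scale)
        decision = 1 if (s + w) > 0 else -1
        correct += decision == category
    return correct / trials

for attn in (1.0, 0.5):  # full vs. divided attention
    acc = (identify(+1, attn) + identify(-1, attn)) / 2
    print(f"attention={attn}: accuracy={acc:.3f}")
```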
Díaz, Begoña; Baus, Cristina; Escera, Carles; Costa, Albert; Sebastián-Gallés, Núria
2008-01-01
Human beings differ in their ability to master the sounds of their second language (L2). Phonetic training studies have proposed that differences in phonetic learning stem from differences in psychoacoustic abilities rather than speech-specific capabilities. We aimed at finding the origin of individual differences in L2 phonetic acquisition in natural learning contexts. We consider two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. For this purpose, event-related potentials (ERPs) were recorded from two groups of early, proficient Spanish-Catalan bilinguals who differed in their mastery of the Catalan (L2) phonetic contrast /e-ε/. Brain activity in response to acoustic change detection was recorded in three different conditions involving tones of different length (duration condition), frequency (frequency condition), and presentation order (pattern condition). In addition, neural correlates of speech change detection were also assessed for both native (/o/-/e/) and nonnative (/o/-/ö/) phonetic contrasts (speech condition). Participants' discrimination accuracy, reflected electrically as a mismatch negativity (MMN), was similar between the two groups of participants in the three acoustic conditions. Conversely, the MMN was reduced in poor perceivers (PP) when they were presented with speech sounds. Therefore, our results support a speech-specific origin of individual variability in L2 phonetic mastery. PMID:18852470
Titterington, Jill; Bates, Sally
2018-01-01
Accuracy of phonetic transcription is a core skill for speech and language therapists (SLTs) worldwide (Howard & Heselwood, 2002). The current study investigates the value of weekly independent online phonetic transcription tasks to support development of this skill in year one SLT students. Using a mixed methods observational design, students enrolled in a year one phonetics module completed 10 weekly homework activities in phonetic transcription on a stand-alone tutorial site (WebFon (Bates, Matthews & Eagles, 2010)) and 5 weekly online quizzes (the 'Ulster Set' (Titterington, unpublished)). Student engagement with WebFon was measured in terms of the number of responses made to 'sparks' on the University's Virtual Learning Environment Discussion Board. Measures of phonetic transcription accuracy were obtained for the 'Ulster Set' and for a stand-alone piece of coursework at the end of the module. Qualitative feedback about experience with the online learning was gathered via questionnaire. A positive significant association was found between student engagement with WebFon and performance in the 'Ulster Set', and between performance in the 'Ulster Set' and the final coursework. Students valued both online independent learning resources, as each supported different learning needs. However, student compliance with WebFon was significantly lower than with the 'Ulster Set'. Motivators of and inhibitors to engagement with the online resources were investigated, identifying what best maximised engagement. These results indicate that while 'independent' online learning can support development of phonetic transcription skills, the activities must be carefully managed and constructively aligned to assessment, providing the level of valence necessary to ensure effective engagement.
Phonetic Encoding of Coda Voicing Contrast under Different Focus Conditions in L1 vs. L2 English
Choi, Jiyoun; Kim, Sahyang; Cho, Taehong
2016-01-01
This study investigated how coda voicing contrast in English would be phonetically encoded in the temporal vs. spectral dimension of the preceding vowel (in vowel duration vs. F1/F2) by Korean L2 speakers of English, and how their L2 phonetic encoding pattern would be compared to that of native English speakers. Crucially, these questions were explored by taking into account the phonetics-prosody interface, testing effects of prominence by comparing target segments in three focus conditions (phonological focus, lexical focus, and no focus). Results showed that Korean speakers utilized the temporal dimension (vowel duration) to encode coda voicing contrast, but failed to use the spectral dimension (F1/F2), reflecting their native language experience—i.e., with a more sparsely populated vowel space in Korean, they are less sensitive to small changes in the spectral dimension, and hence fine-grained spectral cues in English are not readily accessible. Results also showed that along the temporal dimension, both the L1 and L2 speakers hyperarticulated coda voicing contrast under prominence (when phonologically or lexically focused), but hypoarticulated it in the non-prominent condition. This indicates that low-level phonetic realization and high-order information structure interact in a communicatively efficient way, regardless of the speakers’ native language background. The Korean speakers, however, used the temporal phonetic space differently from the way the native speakers did, especially showing less reduction in the no focus condition. This was also attributable to their native language experience—i.e., the Korean speakers’ use of temporal dimension is constrained in a way that is not detrimental to the preservation of coda voicing contrast, given that they failed to add additional cues along the spectral dimension. The results imply that the L2 phonetic system can be more fully illuminated through an investigation of the phonetics-prosody interface in connection with the L2 speakers’ native language experience. PMID:27242571
A model for ethical practices in clinical phonetics and linguistics.
Powell, Thomas W
2007-01-01
The emergence of clinical phonetics and linguistics as an area of scientific inquiry gives rise to the need for guidelines that define ethical and responsible conduct. The diverse membership of the International Clinical Phonetics and Linguistics Association (ICPLA) and the readership of this journal are uniquely suited to consider ethical issues from diverse perspectives. Accordingly, this paper introduces a multi-tiered six-factor model for ethical practices to stimulate discussion of ethical issues.
ERIC Educational Resources Information Center
Champagne-Muzar, Cecile
1996-01-01
Ascertains the influence of the development of receptive phonetic skills on the level of listening comprehension of adults learning French as a second language in a formal setting. Test results indicate substantial gains in phonetics by the experimental group and a significant difference between the performance of experimental and control groups.…
Syllable-constituent perception by hearing-aid users: Common factors in quiet and noise
Miller, James D.; Watson, Charles S.; Leek, Marjorie R.; Dubno, Judy R.; Wark, David J.; Souza, Pamela E.; Gordon-Salant, Sandra; Ahlstrom, Jayne B.
2017-01-01
The abilities of 59 adult hearing-aid users to hear phonetic details were assessed by measuring their abilities to identify syllable constituents in quiet and in differing levels of noise (12-talker babble) while wearing their aids. The set of sounds consisted of 109 frequently occurring syllable constituents (45 onsets, 28 nuclei, and 36 codas) spoken in varied phonetic contexts by eight talkers. In nominal quiet, a speech-to-noise ratio (SNR) of 40 dB, scores of individual listeners ranged from about 23% to 85% correct. Averaged over the range of SNRs commonly encountered in noisy situations, scores of individual listeners ranged from about 10% to 71% correct. The scores in quiet and in noise were very strongly correlated, R = 0.96. This high correlation implies that common factors play primary roles in the perception of phonetic details in quiet and in noise. Otherwise said, hearing-aid users' problems perceiving phonetic details in noise appear to be tied to their problems perceiving phonetic details in quiet and vice versa. PMID:28464618
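As a rough illustration of the quiet-versus-noise association reported above, the sketch below (Python, with invented per-listener scores rather than the study's data) computes the Pearson correlation between identification accuracy in quiet and averaged accuracy in noise.

```python
# Minimal sketch with invented scores; the study's reported association was R = 0.96.
import numpy as np

rng = np.random.default_rng(0)
quiet = rng.uniform(23, 85, size=59)                  # % correct in nominal quiet (illustrative)
noise = 0.8 * quiet - 10 + rng.normal(0, 3, size=59)  # % correct averaged over noisy SNRs (illustrative)

r = np.corrcoef(quiet, noise)[0, 1]
print(f"Pearson r between quiet and noise scores: {r:.2f}")
```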
Phonetic convergence in spontaneous conversations as a function of interlocutor language distance
Kim, Midam; Horton, William S.; Bradlow, Ann R.
2013-01-01
This study explores phonetic convergence during conversations between pairs of talkers with varying language distance. Specifically, we examined conversations within two native English talkers and within two native Korean talkers who had either the same or different regional dialects, and between native and nonnative talkers of English. To measure phonetic convergence, an independent group of listeners judged the similarity of utterance samples from each talker through an XAB perception test, in which X was a sample of one talker’s speech and A and B were samples from the other talker at either early or late portions of the conversation. The results showed greater convergence for same-dialect pairs than for either the different-dialect pairs or the different-L1 pairs. These results generally support the hypothesis that there is a relationship between phonetic convergence and interlocutor language distance. We interpret this pattern as suggesting that phonetic convergence between talker pairs that vary in the degree of their initial language alignment may be dynamically mediated by two parallel mechanisms: the need for intelligibility and the extra demands of nonnative speech production and perception. PMID:23637712
Can mergers-in-progress be unmerged in speech accommodation?
Babel, Molly; McAuliffe, Michael; Haber, Graham
2013-01-01
This study examines spontaneous phonetic accommodation of a dialect with distinct categories by speakers who are in the process of merging those categories. We focus on the merger of the NEAR and SQUARE lexical sets in New Zealand English, presenting New Zealand participants with an unmerged speaker of Australian English. Mergers-in-progress are a uniquely interesting sound change as they showcase the asymmetry between speech perception and production. Yet, we examine mergers using spontaneous phonetic imitation, which is necessarily a behavior in which perceptual input influences speech production. Phonetic imitation is quantified by a perceptual measure and an acoustic calculation of mergedness using a Pillai-Bartlett trace. The results from both analyses indicate that spontaneous phonetic imitation is moderated by extra-linguistic factors such as the valence of assigned conditions and social bias. We also find evidence for a decrease in the degree of mergedness in post-exposure productions. Taken together, our results suggest that under the appropriate conditions New Zealanders phonetically accommodate to Australian English and that in the process of speech imitation, mergers-in-progress can, but do not consistently, become less merged. PMID:24069011
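The 'acoustic calculation of mergedness' mentioned above can be approximated with a Pillai-Bartlett trace computed from F1/F2 measurements of the two lexical sets. The sketch below is a minimal, hedged illustration (invented NEAR/SQUARE tokens, not the study's data or code); values near 0 indicate heavily overlapping categories, values near 1 well-separated ones.

```python
# Hedged sketch: Pillai-Bartlett trace as a "mergedness" index for two vowel classes.
import numpy as np

def pillai_trace(groups):
    """groups: list of (n_i, 2) arrays of [F1, F2] tokens, one per vowel class."""
    all_data = np.vstack(groups)
    grand_mean = all_data.mean(axis=0)
    # Between-class (hypothesis) and within-class (error) SSCP matrices.
    H = sum(len(g) * np.outer(g.mean(0) - grand_mean, g.mean(0) - grand_mean) for g in groups)
    E = sum(np.cov(g, rowvar=False, bias=True) * len(g) for g in groups)
    return np.trace(H @ np.linalg.inv(H + E))

rng = np.random.default_rng(1)
near = rng.normal([450, 2000], [40, 80], size=(30, 2))    # invented NEAR tokens
square = rng.normal([470, 1950], [40, 80], size=(30, 2))  # invented SQUARE tokens (nearly merged)
print(f"Pillai score: {pillai_trace([near, square]):.3f}")
```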
Lebedeva, Gina C.; Kuhl, Patricia K.
2010-01-01
To better understand how infants process complex auditory input, this study investigated whether 11-month-old infants perceive the pitch (melodic) or the phonetic (lyric) components within songs as more salient, and whether melody facilitates phonetic recognition. Using a preferential looking paradigm, uni-dimensional and multi-dimensional songs were tested; either the pitch or syllable order of the stimuli varied. As a group, infants detected a change in pitch order in a 4-note sequence when the syllables were redundant (Experiment 1), but did not detect the identical pitch change with variegated syllables (Experiment 2). Infants were better able to detect a change in syllable order in a sung sequence (Experiment 2) than the identical syllable change in a spoken sequence (Experiment 1). These results suggest that by 11 months, infants cannot “ignore” phonetic information in the context of perceptually salient pitch variation. Moreover, the increased phonetic recognition in song contexts mirrors findings that demonstrate advantages of infant-directed speech. Findings are discussed in terms of how stimulus complexity interacts with the perception of sung speech in infancy. PMID:20472295
Holliday, Jeffrey J; Turnbull, Rory; Eychenne, Julien
2017-10-01
This article presents K-SPAN (Korean Surface Phonetics and Neighborhoods), a database of surface phonetic forms and several measures of phonological neighborhood density for 63,836 Korean words. Currently publicly available Korean corpora are limited by the fact that they only provide orthographic representations in Hangeul, which is problematic since phonetic forms in Korean cannot be reliably predicted from orthographic forms. We describe the method used to derive the surface phonetic forms from a publicly available orthographic corpus of Korean, and report on several statistics calculated using this database; namely, segment unigram frequencies, which are compared to previously reported results, along with segment-based and syllable-based neighborhood density statistics for three types of representation: an "orthographic" form, which is a quasi-phonological representation, a "conservative" form, which maintains all known contrasts, and a "modern" form, which represents the pronunciation of contemporary Seoul Korean. These representations are rendered in an ASCII-encoded scheme, which allows users to query the corpus without having to read Korean orthography, and permits the calculation of a wide range of phonological measures.
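As a hedged illustration of the neighborhood-density measures described above, the sketch below counts segment-level neighbors at edit distance 1 (the usual substitution/deletion/addition definition) over a toy lexicon of segment tuples; the entries are invented, not K-SPAN data, and the real database further distinguishes orthographic, conservative, and modern forms.

```python
# Hedged sketch: segment-based neighborhood density over a toy, invented lexicon.
def edit_distance(a, b):
    """Levenshtein distance over segment sequences."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)] for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def neighborhood_density(word, lexicon):
    """Number of lexicon entries exactly one segment edit away from `word`."""
    return sum(1 for w in lexicon if w != word and edit_distance(word, w) == 1)

lexicon = [("k", "a", "m"), ("k", "a", "n"), ("p", "a", "m"), ("k", "a"), ("s", "a", "m")]
print(neighborhood_density(("k", "a", "m"), lexicon))  # counts segment-level neighbors
```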
Phonetics, Phonology, and Applied Linguistics.
ERIC Educational Resources Information Center
Nadasdy, Adam
1995-01-01
Examines recent trends in phonetics and phonology and their influence on second language instruction, specifically grammar and lexicography. An annotated bibliography discusses nine important works in the field. (99 references) (MDM)
Articulatory mediation of speech perception: a causal analysis of multi-modal imaging data.
Gow, David W; Segawa, Jennifer A
2009-02-01
The inherent confound between the organization of articulation and the acoustic-phonetic structure of the speech signal makes it exceptionally difficult to evaluate the competing claims of motor and acoustic-phonetic accounts of how listeners recognize coarticulated speech. Here we use Granger causality analyses of high spatiotemporal resolution neural activation data derived from the integration of magnetic resonance imaging, magnetoencephalography and electroencephalography, to examine the role of lexical and articulatory mediation in listeners' ability to use phonetic context to compensate for place assimilation. Listeners heard two-word phrases such as pen pad and then saw two pictures, from which they had to select the one that depicted the phrase. Assimilation, lexical competitor environment and the phonological validity of assimilation context were all manipulated. Behavioral data showed an effect of context on the interpretation of assimilated segments. Analysis of 40 Hz gamma phase locking patterns identified a large distributed neural network including 16 distinct regions of interest (ROIs) spanning portions of both hemispheres in the first 200 ms of post-assimilation context. Granger analyses of individual conditions showed differing patterns of causal interaction between ROIs during this interval, with hypothesized lexical and articulatory structures and pathways driving phonetic activation in the posterior superior temporal gyrus in assimilation conditions, but not in phonetically unambiguous conditions. These results lend strong support to the motor theory of speech perception, and clarify the role of lexical mediation in the phonetic processing of assimilated speech.
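For readers unfamiliar with the statistic, the following minimal sketch (invented time series, not the study's source-localized MEG/EEG data) shows a pairwise Granger test with statsmodels, asking whether one ROI's activity improves prediction of another's.

```python
# Hedged sketch: pairwise Granger causality on two invented ROI time series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 200
driver = rng.normal(size=n)                                          # hypothesized "source" ROI
target = np.roll(driver, 2) * 0.6 + rng.normal(scale=0.5, size=n)    # lagged influence on "sink" ROI

# Column 0 is the predicted series; the test asks whether column 1 Granger-causes it.
res = grangercausalitytests(np.column_stack([target, driver]), maxlag=3)
for lag, (tests, _) in res.items():
    print(lag, f"F-test p-value = {tests['ssr_ftest'][1]:.4f}")
```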
Rapid recalibration of speech perception after experiencing the McGurk illusion.
Lüttke, Claudia S; Pérez-Bellido, Alexis; de Lange, Floris P
2018-03-01
The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the visual speech and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as 'ada'). We found that only one trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as 'ada'. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to 'ada' (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, signal detection theoretical analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration is dependent on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization.
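The signal-detection quantities mentioned above (sensitivity and bias) follow directly from hit and false-alarm rates. The sketch below uses invented response rates, not the study's data: d' = z(hits) - z(false alarms) indexes the /aba/-/ada/ distinction, and c = -0.5[z(hits) + z(false alarms)] indexes bias toward one response.

```python
# Hedged sketch: d-prime and criterion from invented 'ada' response rates.
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    """hit = responding 'ada' to /ada/; false alarm = responding 'ada' to /aba/."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Before vs. after exposure to the McGurk illusion (illustrative numbers only).
for label, (h, fa) in {"baseline": (0.90, 0.10), "post-McGurk": (0.85, 0.25)}.items():
    d, c = dprime_and_criterion(h, fa)
    print(f"{label}: d' = {d:.2f}, criterion c = {c:.2f}")
```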
The Phonetics of Head and Body Movement in the Realization of American Sign Language Signs.
Tyrone, Martha E; Mauk, Claude E
2016-01-01
Because the primary articulators for sign languages are the hands, sign phonology and phonetics have focused mainly on them and treated other articulators as passive targets. However, there is abundant research on the role of nonmanual articulators in sign language grammar and prosody. The current study examines how hand and head/body movements are coordinated to realize phonetic targets. Kinematic data were collected from 5 deaf American Sign Language (ASL) signers to allow the analysis of movements of the hands, head and body during signing. In particular, we examine how the chin, forehead and torso move during the production of ASL signs at those three phonological locations. Our findings suggest that for signs with a lexical movement toward the head, the forehead and chin move to facilitate convergence with the hand. By comparison, the torso does not move to facilitate convergence with the hand for signs located at the torso. These results imply that the nonmanual articulators serve a phonetic as well as a grammatical or prosodic role in sign languages. Future models of sign phonetics and phonology should take into consideration the movements of the nonmanual articulators in the realization of signs. © 2016 S. Karger AG, Basel.
Investigation of Chinese text entry performance for mobile display interfaces.
Lin, Po-Hung
2015-01-01
This study examined the effects of panel type, frequency of use and arrangement of phonetic symbols on operation time, usability, visual fatigue and workload in text entry performance. Three types of panel (solid, touch and mixed), three types of frequency of use (low, medium and high) and two types of the arrangement of phonetic symbols (vertical and horizontal) were investigated through 30 college students in the experiment. The results indicated that panel type, frequency of use, arrangement of phonetic symbols and the interaction between panel type and frequency of use were significant factors on operation time. Panel type was also a significant factor on usability, and a touch panel and a solid panel showed better usability than a mixed panel. Furthermore, a touch panel showed good usability and the lowest workload and therefore it is recommended to use a touch panel with vertical phonetic arrangement in sending Chinese text messages. Practitioner Summary: This study found, from ergonomics considerations, that a touch panel showed good usability and it is recommended to use a touch panel with vertical phonetic arrangement in sending Chinese text messages. Mobile display manufacturers can use the results of this study as a reference for future keyboard design.
Phonological Feature Re-Assembly and the Importance of Phonetic Cues
ERIC Educational Resources Information Center
Archibald, John
2009-01-01
It is argued that new phonological features can be acquired in second languages, but that both feature acquisition and feature re-assembly are affected by the robustness of phonetic cues in the input.
Phonetic and phonological imitation of intonation in two varieties of Italian
D’Imperio, Mariapaola; Cavone, Rossana; Petrone, Caterina
2014-01-01
The aim of this study was to test whether both phonetic and phonological representations of intonation can be rapidly modified when imitating utterances belonging to a different regional variety of the same language. Our main hypothesis was that tonal alignment, just as other phonetic features of speech, would be rapidly modified by Italian speakers when imitating pitch accents of a different (Southern) variety of Italian. In particular, we tested whether Bari Italian (BI) speakers would produce later peaks for their native rising L + H* (question pitch accent) in the process of imitating Neapolitan Italian (NI) rising L* + H accents. Also, we tested whether BI speakers are able to modify other phonetic properties (pitch level) as well as phonological characteristics (changes in tonal composition) of the same contour. In a follow-up study, we tested if the reverse was also true, i.e., whether NI speakers would produce earlier peaks within the L* + H accent in the process of imitating the L + H* of BI questions, despite the presence of a contrast between two rising accents in this variety. Our results show that phonetic detail of tonal alignment can be successfully modified by both BI and NI speakers when imitating a model speaker of the other variety. The hypothesis of a selective imitation process preventing alignment modifications in NI was hence not supported. Moreover the effect was significantly stronger for low frequency words. Participants were also able to imitate other phonetic cues, in that they modified global utterance pitch level. Concerning phonological convergence, speakers modified the tonal specification of the edge tones in order to resemble that of the other variety by either suppressing or increasing the presence of a final H%. Hence, our data show that intonation imitation leads to fast modification of both phonetic and phonological intonation representations including detail of tonal alignment and pitch scaling. PMID:25408676
Lexical exposure to native language dialects can improve non-native phonetic discrimination.
Olmstead, Annie J; Viswanathan, Navin
2018-04-01
Nonnative phonetic learning is an area of great interest for language researchers, learners, and educators alike. In two studies, we examined whether nonnative phonetic discrimination of Hindi dental and retroflex stops can be improved by exposure to lexical items bearing the critical nonnative stops. We extend the lexical retuning paradigm of Norris, McQueen, and Cutler (Cognitive Psychology, 47, 204-238, 2003) by having naive American English (AE)-speaking participants perform a pretest-training-posttest procedure. They performed an AXB discrimination task with the Hindi retroflex and dental stops before and after transcribing naturally produced words from an Indian English speaker that either contained these tokens or not. Only those participants who heard words with the critical nonnative phones improved in their posttest discrimination. This finding suggests that exposure to nonnative phones in native lexical contexts supports learning of difficult nonnative phonetic discrimination.
NASA Astrophysics Data System (ADS)
Maskeliunas, Rytis; Rudzionis, Vytautas
2011-06-01
In recent years various commercial speech recognizers have become available. These recognizers make it possible to develop applications incorporating various speech recognition techniques easily and quickly. All of these commercial recognizers are typically targeted at widely spoken languages with large market potential; however, it may be possible to adapt available commercial recognizers for use in environments where less widely spoken languages are used. Since most commercial recognition engines are closed systems, the single avenue for adaptation is to establish ways of selecting proper phonetic transcription methods between the two languages. This paper deals with methods for finding phonetic transcriptions of Lithuanian voice commands so that they can be recognized using English speech engines. The experimental evaluation showed that it is possible to find phonetic transcriptions that enable the recognition of Lithuanian voice commands with recognition accuracy of over 90%.
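A hedged sketch of the general idea follows: each Lithuanian command is paired with one or more English-phone spellings that a closed English engine could accept in its recognition grammar. The command words and ARPAbet-style transcriptions below are hypothetical examples, not the transcriptions evaluated in the paper.

```python
# Hedged illustration only: hypothetical Lithuanian commands mapped to English-phone spellings.
LT_COMMAND_TRANSCRIPTIONS = {
    "pirmyn": ["p ih r m ih n", "p iy r m iy n"],   # hypothetical ARPAbet-style mappings
    "atgal":  ["aa t g aa l"],
    "sustok": ["s uw s t ow k", "s ah s t ao k"],
}

def grammar_entries(commands):
    """Flatten the mapping into (pronunciation, command) pairs for a recognition grammar."""
    return [(pron, cmd) for cmd, prons in commands.items() for pron in prons]

for pron, cmd in grammar_entries(LT_COMMAND_TRANSCRIPTIONS):
    print(f"{pron!r:28} -> {cmd}")
```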
Grammatical constraints on phonological encoding in speech production.
Heller, Jordana R; Goldrick, Matthew
2014-12-01
To better understand the influence of grammatical encoding on the retrieval and encoding of phonological word-form information during speech production, we examine how grammatical class constraints influence the activation of phonological neighbors (words phonologically related to the target--e.g., MOON, TWO for target TUNE). Specifically, we compare how neighbors that share a target's grammatical category (here, nouns) influence its planning and retrieval, assessed by picture naming latencies, and phonetic encoding, assessed by word productions in picture names, when grammatical constraints are strong (in sentence contexts) versus weak (bare naming). Within-category (noun) neighbors influenced planning time and phonetic encoding more strongly in sentence contexts. This suggests that grammatical encoding constrains phonological processing; the influence of phonological neighbors is grammatically dependent. Moreover, effects on planning times could not fully account for phonetic effects, suggesting that phonological interaction affects articulation after speech onset. These results support production theories integrating grammatical, phonological, and phonetic processes.
Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)
NASA Astrophysics Data System (ADS)
Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto
An automatic speech-to-text transformer system, suited to an unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a preceding stage of isolated-word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for that input sequence. Pronunciation differences among some regions of Brazil are considered, but only those that cause differences in the phonological transcription, because those at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all candidate written words are analyzed from an orthographic and grammatical point of view, to eliminate the incorrect ones.
Native language governs interpretation of salient speech sound differences at 18 months
Dietrich, Christiane; Swingley, Daniel; Werker, Janet F.
2007-01-01
One of the first steps infants take in learning their native language is to discover its set of speech-sound categories. This early development is shown when infants begin to lose the ability to differentiate some of the speech sounds their language does not use, while retaining or improving discrimination of language-relevant sounds. However, this aspect of early phonological tuning is not sufficient for language learning. Children must also discover which of the phonetic cues that are used in their language serve to signal lexical distinctions. Phonetic variation that is readily discriminable to all children may indicate two different words in one language but only one word in another. Here, we provide evidence that the language background of 1.5-year-olds affects their interpretation of phonetic variation in word learning, and we show that young children interpret salient phonetic variation in language-specific ways. Three experiments with a total of 104 children compared Dutch- and English-learning 18-month-olds' responses to novel words varying in vowel duration or vowel quality. Dutch learners interpreted vowel duration as lexically contrastive, but English learners did not, in keeping with properties of Dutch and English. Both groups performed equivalently when differentiating words varying in vowel quality. Thus, at one and a half years, children's phonological knowledge already guides their interpretation of salient phonetic variation. We argue that early phonological learning is not just a matter of maintaining the ability to distinguish language-relevant phonetic cues. Learning also requires phonological interpretation at appropriate levels of linguistic analysis. PMID:17911262
Contributions of speech science to the technology of man-machine voice interactions
NASA Technical Reports Server (NTRS)
Lea, Wayne A.
1977-01-01
Research in speech understanding was reviewed. Plans which include prosodics research, phonological rules for speech understanding systems, and continued interdisciplinary phonetics research are discussed. Improved acoustic phonetic analysis capabilities in speech recognizers are suggested.
Phonetic complexity and stuttering in Spanish
Howell, Peter; Au-Yeung, James
2007-01-01
The current study investigated whether phonetic complexity affected stuttering rate for Spanish speakers. The speakers were assigned to three age groups (6-11, 12-17 and 18 years plus) that were similar to those used in an earlier study on English. The analysis was performed using Jakielski's (1998) Index of Phonetic Complexity (IPC) scheme in which each word is given an IPC score based on the number of complex attributes it includes for each of eight factors. Stuttering on function words for Spanish did not correlate with IPC score for any age group. This mirrors the finding for English that stuttering on these words is not affected by phonetic complexity. The IPC scores of content words correlated positively with stuttering rate for 6-11 year old and adult speakers. Comparison was made between the languages to establish whether or not experience with the factors determines the problem they pose for speakers (revealed by differences in stuttering rate). Evidence was obtained that four factors found to be important determinants of stuttering on content words in English for speakers aged 12 and above, also affected Spanish speakers. This occurred despite large differences in frequency of usage of these factors. It is concluded that phonetic factors affect stuttering rate irrespective of a speaker's experience with that factor. PMID:17364620
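The IPC assigns a word one point per complex attribute across eight factors. The sketch below is a deliberately simplified approximation covering only a few such factors, with illustrative segment classes that are not Jakielski's exact definitions.

```python
# Hedged, simplified sketch of IPC-style scoring (a subset of factors, illustrative classes).
DORSALS = set("kgxŋ")
LATE_MANNER = set("fvszʃʒʧʤlr")   # fricatives, affricates, liquids (illustrative)

def ipc_like_score(segments, n_syllables):
    score = 0
    score += sum(1 for s in segments if s in DORSALS)              # dorsal consonants
    score += sum(1 for s in segments if s in LATE_MANNER)          # later-acquired manners
    score += 1 if segments and segments[-1] not in "aeiou" else 0  # word-final consonant
    score += 1 if n_syllables >= 3 else 0                          # long word
    return score

print(ipc_like_score(list("gramatika"), n_syllables=4))  # toy Spanish-like item
```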
Enzinger, Ewald; Morrison, Geoffrey Stewart
2017-08-01
In a 2012 case in New South Wales, Australia, the identity of a speaker on several audio recordings was in question. Forensic voice comparison testimony was presented based on an auditory-acoustic-phonetic-spectrographic analysis. No empirical demonstration of the validity and reliability of the analytical methodology was presented. Unlike the admissibility standards in some other jurisdictions (e.g., US Federal Rule of Evidence 702 and the Daubert criteria, or England & Wales Criminal Practice Directions 19A), Australia's Unified Evidence Acts do not require demonstration of the validity and reliability of analytical methods and their implementation before testimony based upon them is presented in court. The present paper reports on empirical tests of the performance of an acoustic-phonetic-statistical forensic voice comparison system which exploited the same features as were the focus of the auditory-acoustic-phonetic-spectrographic analysis in the case, i.e., second-formant (F2) trajectories in /o/ tokens and mean fundamental frequency (f0). The tests were conducted under conditions similar to those in the case. The performance of the acoustic-phonetic-statistical system was very poor compared to that of an automatic system. Copyright © 2017 Elsevier B.V. All rights reserved.
McArdle, Rachel; Wilson, Richard H
2008-06-01
To analyze the 50% correct recognition data from the Wilson et al (this issue) study, obtained from 24 listeners with normal hearing, and to examine whether acoustic, phonetic, or lexical variables can predict recognition performance for monosyllabic words presented in speech-spectrum noise. The specific variables are as follows: (a) acoustic variables (i.e., effective root-mean-square sound pressure level, duration), (b) phonetic variables (i.e., consonant features such as manner, place, and voicing for initial and final phonemes; vowel phonemes), and (c) lexical variables (i.e., word frequency, word familiarity, neighborhood density, neighborhood frequency). This descriptive, correlational study examined the influence of acoustic, phonetic, and lexical variables on speech-recognition-in-noise performance. Regression analysis demonstrated that 45% of the variance in the 50% point was accounted for by acoustic and phonetic variables, whereas only 3% of the variance was accounted for by lexical variables. These findings suggest that monosyllabic word recognition in noise is more dependent on bottom-up processing than on top-down processing. The results suggest that when speech-in-noise testing is used in a pre- and post-hearing-aid-fitting format, the use of monosyllabic words may be sensitive to changes in audibility resulting from amplification.
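As a hedged sketch of this kind of analysis (invented word-level data and hypothetical predictor names, not the study's variables), the following regresses each word's 50%-correct point on acoustic/phonetic predictors versus lexical predictors and compares the variance explained.

```python
# Hedged sketch: comparing variance explained by two predictor sets on invented data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 150
df = pd.DataFrame({
    "rms_level": rng.normal(65, 3, n),       # acoustic (dB, illustrative)
    "duration": rng.normal(500, 80, n),      # acoustic (ms, illustrative)
    "voiced_onset": rng.integers(0, 2, n),   # phonetic (illustrative)
    "word_freq": rng.normal(3, 1, n),        # lexical (log frequency, illustrative)
    "neighbors": rng.integers(0, 30, n),     # lexical (illustrative)
})
df["snr50"] = -0.4 * df.rms_level - 0.01 * df.duration - 1.5 * df.voiced_onset + rng.normal(0, 1.5, n)

acoustic_phonetic = smf.ols("snr50 ~ rms_level + duration + voiced_onset", data=df).fit()
lexical = smf.ols("snr50 ~ word_freq + neighbors", data=df).fit()
print(f"acoustic+phonetic R^2 = {acoustic_phonetic.rsquared:.2f}, lexical R^2 = {lexical.rsquared:.2f}")
```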
The Dynamics of Phonological Planning
ERIC Educational Resources Information Center
Roon, Kevin D.
2013-01-01
This dissertation proposes a dynamical computational model of the timecourse of phonological parameter setting. In the model, phonological representations embrace phonetic detail, with phonetic parameters represented as activation fields that evolve over time and determine the specific parameter settings of a planned utterance. Existing models of…
Gender differences in lateralization of mismatch negativity in dichotic listening tasks.
Ikezawa, Satoru; Nakagome, Kazuyuki; Mimura, Masaru; Shinoda, Junko; Itoh, Kenji; Homma, Ikuo; Kamijima, Kunitoshi
2008-04-01
With the aim of investigating gender differences in the functional lateralization subserving preattentive processing of language stimuli, we compared auditory mismatch negativities (MMNs) using dichotic listening tasks. Forty-four healthy volunteers, including 23 males and 21 females, participated in the study. MMNs generated by pure-tone and phonetic stimuli were compared, to check for the existence of language-specific gender differences in lateralization. Both EEG amplitude and scalp current density (SCD) data were analyzed. With phonetic MMNs, EEG findings revealed significantly larger amplitude in females than males, especially in the right hemisphere, while SCD findings revealed left hemisphere dominance and contralateral dominance in males alone. With pure-tone MMNs, no significant gender differences in hemispheric lateralization appeared in either EEG or SCD findings. While males exhibited left-lateralized activation with phonetic MMNs, females exhibited more bilateral activity. Further, the contralateral dominance of the SCD distribution associated with the ear receiving deviant stimuli in males indicated that ipsilateral input as well as interhemispheric transfer across the corpus callosum to the ipsilateral side was more suppressed in males than in females. The findings of the present study suggest that functional lateralization subserving preattentive detection of phonetic change differs between the genders. These results underscore the significance of considering the gender differences in the study of MMN, especially when phonetic stimulus is adopted. Moreover, they support the view of Voyer and Flight [Voyer, D., Flight, J., 2001. Gender differences in laterality on a dichotic task: the influence of report strategies. Cortex 37, 345-362.] in that the gender difference in hemispheric lateralization of language function is observed in a well-managed-attention condition, which fits the condition adopted in the MMN measurement; subjects are required to focus attention to a distraction task and thereby ignore the phonetic stimuli that elicit MMN.
Linguistic processing in idiopathic generalized epilepsy: an auditory event-related potential study.
Henkin, Yael; Kishon-Rabin, Liat; Pratt, Hillel; Kivity, Sara; Sadeh, Michelle; Gadoth, Natan
2003-09-01
Auditory processing of increasing acoustic and linguistic complexity was assessed in children with idiopathic generalized epilepsy (IGE) by using auditory event-related potentials (AERPs) as well as reaction time and performance accuracy. Twenty-four children with IGE [12 with generalized tonic-clonic seizures (GTCSs), and 12 with absence seizures (ASs)] with average intelligence and age-appropriate scholastic skills, uniformly medicated with valproic acid (VPA), and 20 healthy controls, performed oddball discrimination tasks that consisted of the following stimuli: (a) pure tones; (b) nonmeaningful monosyllables that differed by their phonetic features (i.e., phonetic stimuli); and (c) meaningful monosyllabic words from two semantic categories (i.e., semantic stimuli). AERPs elicited by nonlinguistic stimuli were similar in healthy and epilepsy children, whereas those elicited by linguistic stimuli (i.e., phonetic and semantic) differed significantly in latency, amplitude, and scalp distribution. In children with GTCSs, phonetic and semantic processing were characterized by slower processing time, manifested by prolonged N2 and P3 latencies during phonetic processing, and prolongation of all AERPs latencies during semantic processing. In children with ASs, phonetic and semantic processing were characterized by increased allocation of attentional resources, manifested by enhanced N2 amplitudes. Semantic processing also was characterized by prolonged P3 latency. In both patient groups, processing of linguistic stimuli resulted in different patterns of brain-activity lateralization compared with that in healthy controls. Reaction time and performance accuracy did not differ among the study groups. AERPs exposed linguistic-processing deficits related to seizure type in children with IGE. Neurologic follow-up should therefore include evaluation of linguistic functions, and remedial intervention should be provided, accordingly.
Neural correlates of audiotactile phonetic processing in early-blind readers: an fMRI study.
Pishnamazi, Morteza; Nojaba, Yasaman; Ganjgahi, Habib; Amousoltani, Asie; Oghabian, Mohammad Ali
2016-05-01
Reading is a multisensory function that relies on arbitrary associations between auditory speech sounds and symbols from a second modality. Studies of bimodal phonetic perception have mostly investigated the integration of visual letters and speech sounds. Blind readers perform an analogous task by using tactile Braille letters instead of visual letters. The neural underpinnings of audiotactile phonetic processing have not been studied before. We used functional magnetic resonance imaging to reveal the neural correlates of audiotactile phonetic processing in 16 early-blind Braille readers. Braille letters and corresponding speech sounds were presented in unimodal, and congruent/incongruent bimodal configurations. We also used a behavioral task to measure the speed of blind readers in identifying letters presented via tactile and/or auditory modalities. Reaction times for tactile stimuli were faster. The reaction times for bimodal stimuli were equal to those for the slower auditory-only stimuli. fMRI analyses revealed the convergence of unimodal auditory and unimodal tactile responses in areas of the right precentral gyrus and bilateral crus I of the cerebellum. The left and right planum temporale fulfilled the 'max criterion' for bimodal integration, but activities of these areas were not sensitive to the phonetical congruency between sounds and Braille letters. Nevertheless, congruency effects were found in regions of frontal lobe and cerebellum. Our findings suggest that, unlike sighted readers who are assumed to have amodal phonetic representations, blind readers probably process letters and sounds separately. We discuss that this distinction might be due to mal-development of multisensory neural circuits in early blinds or it might be due to inherent differences between Braille and print reading mechanisms.
Plasticity of white matter connectivity in phonetics experts.
Vandermosten, Maaike; Price, Cathy J; Golestani, Narly
2016-09-01
Phonetics experts are highly trained to analyze and transcribe speech, both with respect to faster changing, phonetic features, and to more slowly changing, prosodic features. Previously we reported that, compared to non-phoneticians, phoneticians had greater local brain volume in bilateral auditory cortices and the left pars opercularis of Broca's area, with training-related differences in the grey-matter volume of the left pars opercularis in the phoneticians group (Golestani et al. 2011). In the present study, we used diffusion MRI to examine white matter microstructure, indexed by fractional anisotropy, in (1) the long segment of arcuate fasciculus (AF_long), which is a well-known language tract that connects Broca's area, including left pars opercularis, to the temporal cortex, and in (2) the fibers arising from the auditory cortices. Most of these auditory fibers belong to three validated language tracts, namely to the AF_long, the posterior segment of the arcuate fasciculus and the middle longitudinal fasciculus. We found training-related differences in phoneticians in left AF_long, as well as group differences relative to non-experts in the auditory fibers (including the auditory fibers belonging to the left AF_long). Taken together, the results of both studies suggest that grey matter structural plasticity arising from phonetic transcription training in Broca's area is accompanied by changes to the white matter fibers connecting this very region to the temporal cortex. Our findings suggest expertise-related changes in white matter fibers connecting fronto-temporal functional hubs that are important for phonetic processing. Further studies can pursue this hypothesis by examining the dynamics of these expertise related grey and white matter changes as they arise during phonetic training.
Phonetic Change in Newfoundland English
ERIC Educational Resources Information Center
Clarke, Sandra
2012-01-01
Newfoundland English has long been considered autonomous within the North American context. Sociolinguistic studies conducted over the past three decades, however, typically suggest cross-generational change in phonetic feature use, motivated by greater alignment with mainland Canadian English norms. The present study uses data spanning the past…
Tres mitos de la fonetica espanola (Three Myths of Spanish Phonetics).
ERIC Educational Resources Information Center
Dalbor, John B.
1980-01-01
Contrasts current pronunciation of some Spanish consonants with the teachings and theory of pronunciation manuals, advocating more realistic standards of instruction. Gives a detailed phonetic description of common variants of the sounds discussed, covering both Spanish and Latin American dialects. (MES)
Bradlow, Ann; Clopper, Cynthia; Smiljanic, Rajka; Walter, Mary Ann
2010-01-01
The goal of the present study was to devise a means of representing languages in a perceptual similarity space based on their overall phonetic similarity. In Experiment 1, native English listeners performed a free classification task in which they grouped 17 diverse languages based on their perceived phonetic similarity. A similarity matrix of the grouping patterns was then submitted to clustering and multidimensional scaling analyses. In Experiment 2, an independent group of native English listeners sorted the group of 17 languages in terms of their distance from English. Experiment 3 repeated Experiment 2 with four groups of non-native English listeners: Dutch, Mandarin, Turkish and Korean listeners. Taken together, the results of these three experiments represent a step towards establishing an approach to assessing the overall phonetic similarity of languages. This approach could potentially provide the basis for developing predictions regarding foreign-accented speech intelligibility for various listener groups, and regarding speech perception accuracy in the context of background noise in various languages. PMID:21179563
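A minimal sketch of the grouping-to-space pipeline described above (invented sorts from three hypothetical listeners, not the study's data): co-occurrence counts become a similarity matrix, which is converted to dissimilarities and projected into two dimensions with multidimensional scaling.

```python
# Hedged sketch: free-classification groupings -> similarity matrix -> 2-D MDS space.
import numpy as np
from sklearn.manifold import MDS

languages = ["English", "Dutch", "Mandarin", "Korean", "Turkish"]
# Each listener's sort is a list of groups (indices into `languages`); invented here.
sorts = [[[0, 1], [2, 3], [4]],
         [[0, 1, 4], [2, 3]],
         [[0], [1, 4], [2, 3]]]

n = len(languages)
similarity = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for i in group:
            for j in group:
                similarity[i, j] += 1
similarity /= len(sorts)

dissimilarity = 1 - similarity
np.fill_diagonal(dissimilarity, 0)
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissimilarity)
for lang, (x, y) in zip(languages, coords):
    print(f"{lang:10} {x:+.2f} {y:+.2f}")
```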
Speaker Invariance for Phonetic Information: an fMRI Investigation
Salvata, Caden; Blumstein, Sheila E.; Myers, Emily B.
2012-01-01
The current study explored how listeners map the variable acoustic input onto a common sound structure representation while being able to retain phonetic detail to distinguish among the identity of talkers. An adaptation paradigm was utilized to examine areas which showed an equal neural response (equal release from adaptation) to phonetic change when spoken by the same speaker and when spoken by two different speakers, and insensitivity (failure to show release from adaptation) when the same phonetic input was spoken by a different speaker. Neural areas which showed speaker invariance were located in the anterior portion of the middle superior temporal gyrus bilaterally. These findings provide support for the view that speaker normalization processes allow for the translation of a variable speech input to a common abstract sound structure. That this process appears to occur early in the processing stream, recruiting temporal structures, suggests that this mapping takes place prelexically, before sound structure input is mapped on to lexical representations. PMID:23264714
Graphemes Sharing Phonetic Features Tend to Induce Similar Synesthetic Colors
Kang, Mi-Jeong; Kim, Yeseul; Shin, Ji-Young; Kim, Chai-Youn
2017-01-01
Individuals with grapheme-color synesthesia experience idiosyncratic colors when viewing achromatic letters or digits. Despite large individual differences in grapheme-color association, synesthetes tend to associate graphemes sharing a perceptual feature with similar synesthetic colors. Sound has been suggested as one such feature. In the present study, we investigated whether graphemes of which representative phonemes have similar phonetic features tend to be associated with analogous synesthetic colors. We tested five Korean multilingual synesthetes on a color-matching task using graphemes from Korean, English, and Japanese orthography. We then compared the similarity of synesthetic colors induced by those characters sharing a phonetic feature. Results showed that graphemes associated with the same phonetic feature tend to induce synesthetic color in both within- and cross-script analyses. Moreover, this tendency was consistent for graphemes that are not transliterable into each other as well as graphemes that are. These results suggest that it is the perceptual—i.e., phonetic—properties associated with graphemes, not just conceptual associations such as transliteration, that determine synesthetic color. PMID:28348537
Recognition of speaker-dependent continuous speech with KEAL
NASA Astrophysics Data System (ADS)
Mercier, G.; Bigorgne, D.; Miclet, L.; Le Guennec, L.; Querre, M.
1989-04-01
A description of the speaker-dependent continuous speech recognition system KEAL is given. An unknown utterance is recognized by means of the following procedures: acoustic analysis, phonetic segmentation and identification, word and sentence analysis. The combination of feature-based, speaker-independent coarse phonetic segmentation with speaker-dependent statistical classification techniques is one of the main design features of the acoustic-phonetic decoder. The lexical access component is essentially based on a statistical dynamic programming technique which aims at matching a phonemic lexical entry, containing various phonological forms, against a phonetic lattice. Sentence recognition is achieved by use of a context-free grammar and a parsing algorithm derived from Earley's parser. A speaker adaptation module allows some of the system parameters to be adjusted by matching known utterances with their acoustical representation. The task to be performed, described by its vocabulary and its grammar, is given as a parameter of the system. Continuously spoken sentences extracted from a 'pseudo-Logo' language are analyzed and results are presented.
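The lexical-access step can be pictured as a small dynamic-programming match between a lexical phoneme string and a phonetic lattice. The sketch below illustrates the idea under simple assumptions (a fixed skip penalty and a floor probability for unsupported phones); it is not the KEAL implementation.

```python
# Hedged sketch: DP alignment of lexical phoneme strings against a phonetic lattice.
import math

SKIP_PENALTY = 4.0  # cost of skipping a lattice slot or an unmatched lexical phone (assumed)

def match_cost(entry, lattice):
    """Minimum cost of aligning `entry` (tuple of phones) to `lattice` (list of {phone: prob})."""
    m, n = len(entry), len(lattice)
    d = [[math.inf] * (n + 1) for _ in range(m + 1)]
    d[0][0] = 0.0
    for i in range(m + 1):
        for j in range(n + 1):
            if i < m:                      # insert a lexical phone with no lattice support
                d[i + 1][j] = min(d[i + 1][j], d[i][j] + SKIP_PENALTY)
            if j < n:                      # skip a lattice slot
                d[i][j + 1] = min(d[i][j + 1], d[i][j] + SKIP_PENALTY)
            if i < m and j < n:            # consume one slot for one phone
                p = lattice[j].get(entry[i], 1e-3)
                d[i + 1][j + 1] = min(d[i + 1][j + 1], d[i][j] - math.log(p))
    return d[m][n]

lattice = [{"b": 0.7, "p": 0.3}, {"a": 0.9}, {"l": 0.5, "r": 0.5}]   # invented lattice
lexicon = {"bal": ("b", "a", "l"), "par": ("p", "a", "r"), "ba": ("b", "a")}
print(min(lexicon, key=lambda w: match_cost(lexicon[w], lattice)))   # best-matching entry
```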
Articulation in schoolchildren and adults with neurofibromatosis type 1.
Cosyns, Marjan; Mortier, Geert; Janssens, Sandra; Bogaert, Famke; D'Hondt, Stephanie; Van Borsel, John
2012-01-01
Several authors mentioned the occurrence of articulation problems in the neurofibromatosis type 1 (NF1) population. However, few studies have undertaken a detailed analysis of the articulation skills of NF1 patients, especially in schoolchildren and adults. Therefore, the aim of the present study was to examine in depth the articulation skills of NF1 schoolchildren and adults, both phonetically and phonologically. Speech samples were collected from 43 Flemish NF1 patients (14 children and 29 adults), ranging in age between 7 and 53 years, using a standardized speech test in which all Flemish single speech sounds and most clusters occur in all their permissible syllable positions. Analyses concentrated on consonants only and included a phonetic inventory, a phonetic, and a phonological analysis. It was shown that phonetic inventories were incomplete in 16.28% (7/43) of participants, in which totally correct realizations of the sibilants /ʃ/ and/or /ʒ/ were missing. Phonetic analysis revealed that distortions were the predominant phonetic error type. Sigmatismus stridens, multiple ad- or interdentality, and, in children, rhotacismus non vibrans were frequently observed. From a phonological perspective, the most common error types were substitution and syllable structure errors. Particularly, devoicing, cluster simplification, and, in children, deletion of the final consonant of words were perceived. Further, it was demonstrated that significantly more men than women presented with an incomplete phonetic inventory, and that girls tended to display more articulation errors than boys. Additionally, children exhibited significantly more articulation errors than adults, suggesting that although the articulation skills of NF1 patients evolve positively with age, articulation problems do not resolve completely from childhood to adulthood. As such, the articulation errors made by NF1 adults may be regarded as residual articulation disorders. It can be concluded that the speech of NF1 patients is characterized by mild articulation disorders at an age where this is no longer expected. Readers will be able to describe neurofibromatosis type 1 (NF1) and explain the articulation errors displayed by schoolchildren and adults with this genetic syndrome. © 2011 Elsevier Inc. All rights reserved.
Phonetic Ability in Stuttering
ERIC Educational Resources Information Center
Wingate, M. E.
1971-01-01
On two tests of phonetic manipulation 25 male stutterers were found to be inferior to matched controls. Results are reported to be consistent with previous findings of author and to interrelate with earlier research suggesting that some inadequacy in sound-making skills is an important aspect of stuttering. (Author/KW)
Phonetic Processing When Learning Words
ERIC Educational Resources Information Center
Havy, Mélanie; Bouchon, Camillia; Nazzi, Thierry
2016-01-01
Infants have remarkable abilities to learn several languages. However, phonological acquisition in bilingual infants appears to vary depending on the phonetic similarities or differences of their two native languages. Many studies suggest that learning contrasts with different realizations in the two languages (e.g., the /p/, /t/, /k/ stops have…
Revis, J; Galant, C; Fredouille, C; Ghio, A; Giovanni, A
2012-01-01
Widely studied in terms of perception, acoustics or aerodynamics, dysphonia nevertheless remains a speech phenomenon, closely related to the phonetic composition of the message conveyed by the voice. In this paper, we present a series of three studies that aim to understand the implications of the phonetic manifestation of dysphonia. Our first study proposes a new approach to the perceptual analysis of dysphonia (phonetic labeling), the principle of which is to listen to and evaluate each phoneme in a sentence separately. This study confirms Laver's hypothesis that dysphonia is not a constant noise added to the speech signal, but a discontinuous phenomenon occurring on certain phonemes, depending on the phonetic context. However, the burden of executing this task led us to turn to the techniques of automatic speaker recognition (ASR) to automate the procedure. In collaboration with the LIA, we developed a system for the automatic classification of dysphonia based on ASR techniques; this is the subject of our second study. The first results obtained with this system suggest that the unvoiced consonants contribute most to the automatic classification of dysphonia. This result is surprising, since it is often assumed that dysphonia affects only laryngeal vibration. We then sought explanations for this phenomenon; our assumptions and findings are presented in the third study.
English speech acquisition in 3- to 5-year-old children learning Russian and English.
Gildersleeve-Neumann, Christina E; Wright, Kira L
2010-10-01
English speech acquisition in Russian-English (RE) bilingual children was investigated, exploring the effects of Russian phonetic and phonological properties on English single-word productions. Russian has more complex consonants and clusters and a smaller vowel inventory than English. One hundred thirty-seven single-word samples were phonetically transcribed from 14 RE and 28 English-only (E) children, ages 3;3 (years;months) to 5;7. Language and age differences were compared descriptively for phonetic inventories. Multivariate analyses compared phoneme accuracy and error rates between the two language groups. RE children produced Russian-influenced phones in English, including palatalized consonants and trills, and demonstrated significantly higher rates of trill substitution, final devoicing, and vowel errors than E children, suggesting Russian language effects on English. RE and E children did not differ in their overall production complexity, with similar final consonant deletion and cluster reduction error rates, similar phonetic inventories by age, and similar levels of phonetic complexity. Both older language groups were more accurate than the younger language groups. We observed effects of Russian on English speech acquisition; however, there were similarities between the RE and E children that have not been reported in previous studies of speech acquisition in bilingual children. These findings underscore the importance of knowing the phonological properties of both languages of a bilingual child in assessment.
Auditory-Phonetic Projection and Lexical Structure in the Recognition of Sine-Wave Words
ERIC Educational Resources Information Center
Remez, Robert E.; Dubowski, Kathryn R.; Broder, Robin S.; Davids, Morgana L.; Grossman, Yael S.; Moskalenko, Marina; Pardo, Jennifer S.; Hasbun, Sara Maria
2011-01-01
Speech remains intelligible despite the elimination of canonical acoustic correlates of phonemes from the spectrum. A portion of this perceptual flexibility can be attributed to modulation sensitivity in the auditory-to-phonetic projection, although signal-independent properties of lexical neighborhoods also affect intelligibility in utterances…
Phonetic Variation and Interactional Contingencies in Simultaneous Responses
ERIC Educational Resources Information Center
Walker, Gareth
2016-01-01
An auspicious but unexplored environment for studying phonetic variation in naturalistic interaction is where two or more participants say the same thing at the same time. Working with a core dataset built from the multimodal Augmented Multi-party Interaction corpus, the principles of conversation analysis were followed to analyze the sequential…
Statistical Knowledge and Learning in Phonology
ERIC Educational Resources Information Center
Dunbar, Ewan Michael
2013-01-01
This dissertation deals with the theory of the phonetic component of grammar in a formal probabilistic inference framework: (1) it has been recognized since the beginning of generative phonology that some language-specific phonetic implementation is actually context-dependent, and thus it can be said that there are gradient "phonetic…
Lexical Access for Phonetic Ambiguities.
ERIC Educational Resources Information Center
Spencer, N. J.; Wollman, Neil
1980-01-01
Reports on research that (1) suggests that phonetically ambiguous pairs (ice cream/I scream) have been used inaccurately to illustrate contextual effects in word segmentation, (2) supports unitary rather than exhaustive processing, and (3) supports the use of the concepts of word frequency and listener expectations instead of top-down, multiple…
Phonetic Detail in the Developing Lexicon
ERIC Educational Resources Information Center
Swingley, Daniel
2003-01-01
Although infants show remarkable sensitivity to linguistically relevant phonetic variation in speech, young children sometimes appear not to make use of this sensitivity. Here, children's knowledge of the sound-forms of familiar words was assessed using a visual fixation task. Dutch 19-month-olds were shown pairs of pictures and heard correct…
Phonetics and Phonology. Occasional Papers, No. 16.
ERIC Educational Resources Information Center
Essex Univ., Colchester (England). Dept. of Language and Linguistics.
This volume is devoted to phonetics and phonology. It consists of the following papers: (1) "Generative Phonology, Dependency Phonology and Southern French," by J. Durand, which discusses aspects of a regional pronunciation of French, the status of syllables in generative phonology, and concepts of dependency phonology; (2) "On the…
The Impact of Otitis Media with Effusion on Infant Phonetic Perception
ERIC Educational Resources Information Center
Polka, Linda; Rvachew, Susan
2005-01-01
The effect of prior otitis media with effusion (OME) or current middle ear effusion (MEE) on phonetic perception was examined by testing infants' discrimination of "boo" and "goo" syllables in 2 test sessions. Middle ear function was assessed following each perception test using tympanometry. Perceptual performance was compared…
Fun Games and Activities for Pronunciation and Phonetics Classes at Universities.
ERIC Educational Resources Information Center
Makarova, Veronica
Class activities and games designed to stimulate student interest and provide feedback in English-as-a-Second-Language (ESOL) pronunciation and phonetics are described. They are intended to address specific challenges of a typical Japanese ESOL classroom--low student motivation and inadequate feedback--and to supplement conventional language…
Effects of Phonetic Context on Relative Fundamental Frequency
ERIC Educational Resources Information Center
Lien, Yu-An S.; Gattuccio, Caitlin I.; Stepp, Cara E.
2014-01-01
Purpose: The effect of phonetic context on relative fundamental frequency (RFF) was examined, in order to develop stimuli sets with minimal within-speaker variability that can be implemented in future clinical protocols. Method: Sixteen speakers with healthy voices produced RFF stimuli. Uniform utterances consisted of 3 repetitions of the same…
Developing a Weighted Measure of Speech Sound Accuracy
ERIC Educational Resources Information Center
Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.
2011-01-01
Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…
The Psychological Reality of Phonetic Features in Children.
ERIC Educational Resources Information Center
Zagar, Linda L.; Locke, John L.
1986-01-01
Ten linguistically normal four to five year olds were trained to associate consonantal features of voicing, manner, and place of articulation with cups of particular location and color. It was concluded that phonetic features are of limited availability to children in associative tasks and that their clinical value with phonologically disordered…
Neural and Behavioral Mechanisms of Clear Speech
ERIC Educational Resources Information Center
Luque, Jenna Silver
2017-01-01
Clear speech is a speaking style that has been shown to improve intelligibility in adverse listening conditions, for various listener and talker populations. Clear-speech phonetic enhancements include a slowed speech rate, expanded vowel space, and expanded pitch range. Although clear-speech phonetic enhancements have been demonstrated across a…
Statistical Learning of Phonetic Categories: Insights from a Computational Approach
ERIC Educational Resources Information Center
McMurray, Bob; Aslin, Richard N.; Toscano, Joseph C.
2009-01-01
Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model…
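As a rough illustration of the distributional-learning idea investigated above, the sketch below fits a two-category Gaussian mixture to a synthetic bimodal voice-onset-time distribution with a hand-rolled EM loop. The data, starting values and number of iterations are assumptions for the sketch, not the article's actual model.

```python
import numpy as np

# Toy distributional learning of two phonetic categories from voice-onset-time (VOT) values.
# Synthetic data (in ms) stand in for the input a learner might receive.
rng = np.random.default_rng(1)
vot = np.concatenate([rng.normal(0, 5, 300), rng.normal(50, 10, 300)])

mu = np.array([10.0, 40.0])   # initial category means
sd = np.array([20.0, 20.0])   # initial category spreads
w = np.array([0.5, 0.5])      # initial mixing weights

for _ in range(200):
    # E-step: responsibility of each category for each token
    dens = w * np.exp(-0.5 * ((vot[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate category parameters from the weighted tokens
    nk = resp.sum(axis=0)
    mu = (resp * vot[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (vot[:, None] - mu) ** 2).sum(axis=0) / nk)
    w = nk / len(vot)

print("learned category means (ms):", mu.round(1))   # expect values near 0 and 50
```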
Phonetic Diversity, Statistical Learning, and Acquisition of Phonology
ERIC Educational Resources Information Center
Pierrehumbert, Janet B.
2003-01-01
In learning to perceive and produce speech, children master complex language-specific patterns. Daunting language-specific variation is found both in the segmental domain and in the domain of prosody and intonation. This article reviews the challenges posed by results in phonetic typology and sociolinguistics for the theory of language…
Practising English Phonetic Symbols in a Communicative Way.
ERIC Educational Resources Information Center
Chu, Wai Ling
Classroom exercises designed to help students learn phonetic symbols more effectively are described. The exercises were developed for use in a Hong Kong school. The idea behind their creation was that use of the symbols in communicative situations would emphasize their utility for learning English pronunciation. Each exercise uses contextualized…
Phonological Encoding and Phonetic Duration
ERIC Educational Resources Information Center
Fricke, Melinda Denise
2013-01-01
Studies of connected speech have repeatedly shown that the contextual predictability of a word is related to its phonetic duration; more predictable words tend to be produced with shorter duration, when other factors are controlled for (Aylett & Turk, 2004, 2006; Bell et al., 2003; Bell, Brenier, Gregory, Girand, & Jurafsky, 2009; Gahl,…
Instrumental and perceptual phonetic analyses: the case for two-tier transcriptions.
Howard, Sara; Heselwood, Barry
2011-11-01
In this article, we discuss the relationship between instrumental and perceptual phonetic analyses. Using data drawn from typical and atypical speech production, we argue that the use of two-tier transcriptions, which can compare and contrast perceptual and instrumental information, is valuable both for our general understanding of the mechanisms of speech production and perception and also for assessment and intervention for individuals with atypical speech production. The central tenet of our case is that instrumental and perceptual analyses are not in competition to give a single 'correct' account of speech data. Instead, they offer perspectives on complementary phonetic domains, which interlock in the speech chain to encompass production, transmission and perception.
Segmenting words from natural speech: subsegmental variation in segmental cues.
Rytting, C Anton; Brew, Chris; Fosler-Lussier, Eric
2010-06-01
Most computational models of word segmentation are trained and tested on transcripts of speech, rather than the speech itself, and assume that speech is converted into a sequence of symbols prior to word segmentation. We present a way of representing speech corpora that avoids this assumption, and preserves acoustic variation present in speech. We use this new representation to re-evaluate a key computational model of word segmentation. One finding is that high levels of phonetic variability degrade the model's performance. While robustness to phonetic variability may be intrinsically valuable, this finding needs to be complemented by parallel studies of the actual abilities of children to segment phonetically variable speech.
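For contrast with the acoustic representation argued for above, a purely symbolic segmenter can be sketched in a few lines: transitional probabilities are computed over a toy transcript and a boundary is posited at local probability minima. The corpus and the heuristic are illustrative assumptions, not the specific model re-evaluated in the paper.

```python
from collections import Counter

# Toy transitional-probability segmenter over symbol sequences (letters stand in for phones).
corpus = ["lookatthedog", "thedogisbig", "lookatthecat"]

bigrams, unigrams = Counter(), Counter()
for utt in corpus:
    unigrams.update(utt)
    bigrams.update(zip(utt, utt[1:]))

def transition_prob(a, b):
    return bigrams[(a, b)] / unigrams[a] if unigrams[a] else 0.0

def segment(utt):
    """Posit a word boundary wherever the forward transitional probability dips locally."""
    tps = [transition_prob(a, b) for a, b in zip(utt, utt[1:])]
    out = [utt[0]]
    for i in range(1, len(utt)):
        left = tps[i - 2] if i >= 2 else 1.0
        right = tps[i] if i < len(tps) else 1.0
        if tps[i - 1] < left and tps[i - 1] < right:   # local minimum: boundary before utt[i]
            out.append("|")
        out.append(utt[i])
    return "".join(out)

print(segment("lookatthedog"))
```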
Lorma: A Reference Handbook of Phonetics, Grammar, Lexicon and Learning Procedures.
ERIC Educational Resources Information Center
Dwyer, David James
The grammar, phonetics, and lexicon of Lorma, one of the Mande languages of Liberia, are described for the use of Peace Corps volunteers learning the language without teacher assistance. This handbook includes an introduction to the languages of Liberia, procedures for learning a language without assistance, guidelines for tutors, a reference…
Phonetic Recalibration Only Occurs in Speech Mode
ERIC Educational Resources Information Center
Vroomen, Jean; Baart, Martijn
2009-01-01
Upon hearing an ambiguous speech sound dubbed onto lipread speech, listeners adjust their phonetic categories in accordance with the lipread information (recalibration) that tells what the phoneme should be. Here we used sine wave speech (SWS) to show that this tuning effect occurs if the SWS sounds are perceived as speech, but not if the sounds…
Adults Show Less Sensitivity to Phonetic Detail in Unfamiliar Words, Too
ERIC Educational Resources Information Center
White, Katherine S.; Yee, Eiling; Blumstein, Sheila E.; Morgan, James L.
2013-01-01
Young word learners fail to discriminate phonetic contrasts in certain situations, an observation that has been used to support arguments that the nature of lexical representation and lexical processing changes over development. An alternative possibility, however, is that these failures arise naturally as a result of how word familiarity affects…
Semantic vs. Phonetic Decoding Strategies in Non-Native Readers of Chinese
ERIC Educational Resources Information Center
Williams, Clay H.
2010-01-01
This dissertation examines the effects of semantic and phonetic radicals on Chinese character decoding by high-intermediate level Chinese as a Foreign Language (CFL) learners. The results of the main study (discussed in Chapter #5) suggest that the CFL learners tested have a well-developed semantic pathway to recognition; however, their…
ERIC Educational Resources Information Center
Cavaliere, Roberto
1988-01-01
Discusses a study of the expressive qualities of oral language. Results suggest that there is a natural rather than an arbitrary relationship between words and their meanings. Practical applications of this theory of phonetic symbolism in the area of commercial advertising are presented. (CFM)
Infant-Directed Speech Supports Phonetic Category Learning in English and Japanese
ERIC Educational Resources Information Center
Werker, Janet F.; Pons, Ferran; Dietrich, Christiane; Kajikawa, Sachiyo; Fais, Laurel; Amano, Shigeaki
2007-01-01
Across the first year of life, infants show decreased sensitivity to phonetic differences not used in the native language [Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: evidence for perceptual reorganization during the first year of life. "Infant Behaviour and Development," 7, 49-63]. In an artificial language learning…
Phonology Shaped by Phonetics: The Case of Intervocalic Lenition
ERIC Educational Resources Information Center
Kaplan, Abby
2010-01-01
The goal of this dissertation is to explore the phonetic bases of intervocalic lenition--specifically, voicing and spirantization of intervocalic stops. A traditional understanding of phonological patterns like these is that they involve articulatory effort reduction, in that speakers substitute an easy sound for a hard one. Experiment 1 uses a…
Phonetic Pause Unites Phonology and Semantics against Morphology and Syntax
ERIC Educational Resources Information Center
Sakarna, Ahmad Khalaf; Mobaideen, Adnan
2012-01-01
The present study investigates the phonological effect triggered by the different types of phonetic pause used in the Quran on morphology, syntax, and semantics. It argues that Quranic pause provides interesting evidence about the close relation between phonology and semantics, on the one side, and morphology and syntax, on the other…
Manual of Articulatory Phonetics: Teacher's Guide.
ERIC Educational Resources Information Center
Smalley, William A.
This teaching guide is closely integrated with the "Manual of Articulatory Phonetics" (FL 002 882) and its "Workbook Supplement" (FL 002 881). The guide is based on lesson plans which have been developed by the staff using the manual during its developmental period. An introduction on using the lesson plans and teaching techniques is provided.…
Phonetic Variability in Residual Speech Sound Disorders: Exploration of Subtypes
ERIC Educational Resources Information Center
Preston, Jonathan L.; Koenig, Laura L.
2011-01-01
Purpose: To explore whether subgroups of children with residual speech sound disorders (R-SSDs) can be identified through multiple measures of token-to-token phonetic variability (changes in one spoken production to the next). Method: Children with R-SSDs were recorded during a rapid multisyllabic picture naming task and an oral diadochokinetic…
ERIC Educational Resources Information Center
Maire, Jean-Francois
1975-01-01
Gives the justification, text, and examples of two recordings illustrating the phonetic oppositions of four French vowels. The tapes were produced and tested at the Universite de Lausanne, and are intended for language laboratory use. (Text is in French.) (MSE)
ERIC Educational Resources Information Center
Sarmiento Padilla, Jose
1974-01-01
Describes experiments in the field of phonetic correction. Several techniques used at the University of Mons for teaching Spanish pronunciation to French-speaking Belgians are explained. (Text is in French.) (PMP)
ERIC Educational Resources Information Center
McDougall, Patricia; Borowsky, Ron; MacKinnon, G. E.; Hymel, Shelley
2005-01-01
Recent research on developmental dyslexia has suggested a phonological core deficit hypothesis (e.g., Manis, Seidenberg, Doi, McBride-Chang, & Peterson, 1996; Stanovich, Siegel, & Gottardo, 1997) whereby pure cases of developmental phonological dyslexia (dysfunctional phonetic decoding processing but normal sight vocabulary processing) can exist,…
N170 Tuning in Chinese: Logographic Characters and Phonetic Pinyin Script
ERIC Educational Resources Information Center
Qin, Rui; Maurits, Natasha; Maassen, Ben
2016-01-01
In alphabetic languages, print consistently elicits enhanced, left-lateralized N170 responses in the event-related potential compared to control stimuli. In the current study, we adopted a cross-linguistic design to investigate N170 tuning to logographic Chinese and to "pinyin," an auxiliary phonetic system in Chinese. The results…
Poorer Phonetic Perceivers Show Greater Benefit in Phonetic-Phonological Speech Learning
ERIC Educational Resources Information Center
Ingvalson, Erin M.; Barr, Allison M.; Wong, Patrick C. M.
2013-01-01
Purpose: Previous research has demonstrated that native English speakers can learn lexical tones in word context (pitch-to-word learning), to an extent. However, learning success depends on learners' pre-training sensitivity to pitch patterns. The aim of this study was to determine whether lexical pitch-pattern training given before lexical…
Phonetic Realization and Perception of Prominence among Lexical Tones in Mandarin Chinese
ERIC Educational Resources Information Center
Bao, Mingzhen
2008-01-01
Linguistic prominence is defined as words or syllables perceived auditorily as standing out from their environment. It is explored through changes in pitch, duration and loudness. In this study, phonetic realization and perception of prominence among lexical tones in Mandarin Chinese was investigated in two experiments. Experiment 1 explored…
On the Phonetic Consonance in Quranic Verse-Final "Fawāṣil"
ERIC Educational Resources Information Center
Aldubai, Nadhim Abdulamalek
2015-01-01
The present research aims to discuss the phonological patterns in Quranic verse-final pauses ("fawāṣil") in order to provide an insight into the phonetic network governing the symmetrical and the asymmetrical pauses ("fawāṣil") in terms of concordance ("al-nasaq al-ṣawti"). The data are collected from different parts…
Articulatory Mediation of Speech Perception: A Causal Analysis of Multi-Modal Imaging Data
ERIC Educational Resources Information Center
Gow, David W., Jr.; Segawa, Jennifer A.
2009-01-01
The inherent confound between the organization of articulation and the acoustic-phonetic structure of the speech signal makes it exceptionally difficult to evaluate the competing claims of motor and acoustic-phonetic accounts of how listeners recognize coarticulated speech. Here we use Granger causation analysis of high spatiotemporal resolution…
Acoustic and Durational Properties of Indian English Vowels
ERIC Educational Resources Information Center
Maxwell, Olga; Fletcher, Janet
2009-01-01
This paper presents findings of an acoustic phonetic analysis of vowels produced by speakers of English as a second language from northern India. The monophthongal vowel productions of a group of male speakers of Hindi and male speakers of Punjabi were recorded, and acoustic phonetic analyses of vowel formant frequencies and vowel duration were…
ERIC Educational Resources Information Center
Davidson, Lisa
2011-01-01
Previous research indicates that multiple levels of linguistic information play a role in the perception and discrimination of non-native phonemes. This study examines the interaction of phonetic, phonemic and phonological factors in the discrimination of non-native phonotactic contrasts. Listeners of Catalan, English, and Russian are presented…
Report of the Phonology Laboratory, No. 2.
ERIC Educational Resources Information Center
California Univ., Berkeley. Dept. of Linguistics.
This is one of a series of reports intended to make the results of research available and to serve as progress reports. The following abstracts are included: (1) "Learning the Phonetic Cues to the Voiced-Voiceless Distinction: Preliminary Study of Four American English Speaking Children," Mel Greenlee; (2) "Learning the Phonetic Cues to the…
Variability in Phonetics. York Papers in Linguistics, No. 6.
ERIC Educational Resources Information Center
Tatham, M. A. A.
Variability is a term used to cover several types of phenomena in language sound patterns and in phonetic realization of those patterns. Variability refers to the fact that every repetition of an utterance is different, in amplitude, rate of delivery, formant frequencies, fundamental frequency or minor phase relationship changes across the sound…
A Multimedia English Learning System Using HMMs to Improve Phonemic Awareness for English Learning
ERIC Educational Resources Information Center
Lai, Yen-Shou; Tsai, Hung-Hsu; Yu, Pao-Ta
2009-01-01
This paper proposes a multimedia English learning (MEL) system, based on Hidden Markov Models (HMMs) and mastery theory strategy, for teaching students with the aim of enhancing their English phonetic awareness and pronunciation. It can analyze phonetic structures, identify and capture pronunciation errors to provide students with targeted advice…
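A minimal sketch of the HMM idea behind such a system, assuming the hmmlearn package and random arrays in place of real acoustic features: one model per target phone is trained, and a learner's production is flagged when the intended phone's model is not the best-scoring one. This is an illustration only, not the MEL system's implementation.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(2)

def fake_training_data(mean, n_utts=20, frames=30, dim=12):
    """Random stand-in for MFCC frames of many recordings of one phone."""
    X = rng.normal(mean, 1.0, size=(n_utts * frames, dim))
    return X, [frames] * n_utts

# Train one small Gaussian HMM per target phone (two commonly confused phones here).
models = {}
for phone, mean in [("th", 0.0), ("s", 2.0)]:
    X, lengths = fake_training_data(mean)
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20, random_state=0)
    m.fit(X, lengths)
    models[phone] = m

# Grade a learner's attempt at "th" that actually sounds /s/-like.
learner_frames = rng.normal(2.0, 1.0, size=(30, 12))
scores = {phone: m.score(learner_frames) for phone, m in models.items()}
recognized = max(scores, key=scores.get)
print("intended 'th', recognized as:", recognized)   # a mismatch flags the pronunciation error
```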
Integration of Pragmatic and Phonetic Cues in Spoken Word Recognition
Rohde, Hannah; Ettlinger, Marc
2015-01-01
Although previous research has established that multiple top-down factors guide the identification of words during speech processing, the ultimate range of information sources that listeners integrate from different levels of linguistic structure is still unknown. In a set of experiments, we investigate whether comprehenders can integrate information from the two most disparate domains: pragmatic inference and phonetic perception. Using contexts that trigger pragmatic expectations regarding upcoming coreference (expectations for either he or she), we test listeners' identification of phonetic category boundaries (using acoustically ambiguous words on the /hi/~/ʃi/ continuum). The results indicate that, in addition to phonetic cues, word recognition also reflects pragmatic inference. These findings are consistent with evidence for top-down contextual effects from lexical, syntactic, and semantic cues, but they extend this previous work by testing cues at the pragmatic level and by eliminating a statistical-frequency confound that might otherwise explain the previously reported results. We conclude by exploring the time-course of this interaction and discussing how different models of cue integration could be adapted to account for our results. PMID:22250908
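The cue-integration logic can be illustrated with a toy Bayesian-style calculation; the numbers are hypothetical and this is not the authors' model. A pragmatic prior over upcoming coreference is combined with an acoustic likelihood for an ambiguous token.

```python
# Hypothetical numbers: how a pragmatic expectation could resolve an ambiguous /hi/~/ʃi/ token.
p_acoustic = {"he": 0.5, "she": 0.5}   # phonetically ambiguous token: acoustics are uninformative
p_context = {"he": 0.8, "she": 0.2}    # context triggers an expectation for male coreference

posterior = {w: p_acoustic[w] * p_context[w] for w in p_acoustic}
z = sum(posterior.values())
posterior = {w: p / z for w, p in posterior.items()}
print(posterior)   # {'he': 0.8, 'she': 0.2}: the pragmatic prior decides when acoustics are ambiguous
```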
One hundred years of instrumental phonetic fieldwork on North American Indian languages
NASA Astrophysics Data System (ADS)
McDonough, Joyce
2005-04-01
A resurgence of interest in phonetic fieldwork on generally morphologically complex North American Indian languages over the last 15 years is a continuation of a tradition started a century ago with Earle Pliny Goddard, who collected kymographic and palatographic field data between 1906 and 1927 on several Athabaskan languages: Coastal Athabaskan (Hupa and Kato), Apachean (Mescalero, Jicarilla, White Mountain, San Juan Carlos Apache), and several Athabaskan languages in Northern Canada (Cold Lake and Beaver); data that remains important for its record of segmental timing profiles and rare articulatory documentation in then largely monolingual communities. This data, in combination with new work, has resulted in the emergence of a body of knowledge of these typologically distinct families that often challenges notions of phonetic universality and typology. Using the Athabaskan languages as a benchmark example and starting with Goddard's work, two types of emergent typological patterns will be discussed: the persistence of fine-grained timing and duration details across the widely dispersed family, and the broad variation in prosodic types that exists, both of which are unaccounted for by phonetic or phonological theories.
Foveal splitting causes differential processing of Chinese orthography in the male and female brain.
Hsiao, Janet Hui-Wen; Shillcock, Richard
2005-10-01
Chinese characters contain separate phonetic and semantic radicals. A dominant character type exists in which the semantic radical is on the left and the phonetic radical on the right; an opposite, minority structure also exists, with the semantic radical on the right and the phonetic radical on the left. We show that, when asked to pronounce isolated tokens of these two character types, males responded significantly faster when the phonetic information was on the right, whereas females showed a non-significant tendency in the opposite direction. Recent research on foveal structure and reading suggests that the two halves of a centrally fixated character are initially processed in different hemispheres. The male brain typically relies more on the left hemisphere for phonological processing compared with the female brain, causing this gender difference to emerge. This interaction is predicted by an implemented computational model. This study supports the existence of a gender difference in phonological processing, and shows that the effects of foveal splitting in reading extend far enough into word recognition to interact with the gender of the reader in a naturalistic reading task.
Phonetic difficulty and stuttering in English
Howell, Peter; Au-Yeung, James; Yaruss, Scott; Eldridge, Kevin
2007-01-01
Previous work has shown that phonetic difficulty affects older, but not younger, speakers who stutter and that older speakers experience more difficulty on content words than function words. The relationship between stuttering rate and a recently developed index of phonetic complexity (IPC; Jakielski, 1998) was examined in this study separately for function and content words for speakers in the 6-11, 11+ to 18, and 18+ age groups. The hypothesis that stuttering rate on the content words of older speakers, but not younger speakers, would be related to the IPC score was supported. It is argued that the similarity between results obtained with the IPC scores and those of a previous analysis that looked at late emerging consonants, consonant strings and multiple syllables (also conducted separately on function and content words) validates the former instrument. In further analyses, the factors most likely to lead to stuttering in English and their order of importance were established. The order found was consonant by manner, consonant by place, word length and contiguous consonant clusters. As the effects of phonetic difficulty are evident in the teenage years and adulthood, at least some of the factors may have an acquired influence on stuttering (rather than an innate universal basis, as the theory behind Jakielski's work suggests). This may be established in future work by cross-linguistic comparisons to see which factors operate universally. Disfluency on function words in early childhood appears to be responsive to factors other than phonetic complexity. PMID:17342878
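As an illustration of how an index of this kind works, the toy scorer below awards points for the factors named above (consonants by manner, consonants by place, word length, contiguous consonant clusters). The symbol sets and point values are assumptions, not Jakielski's actual IPC weights.

```python
import re

LATE_MANNER = set("szfvr")   # rough orthographic stand-in for later-emerging manners
DORSAL = set("kg")           # consonants by place (dorsals)
VOWELS = set("aeiou")

def toy_complexity(word):
    score = sum(c in LATE_MANNER for c in word)        # consonants by manner
    score += sum(c in DORSAL for c in word)            # consonants by place
    if sum(c in VOWELS for c in word) >= 3:            # word length (three or more vowels)
        score += 2
    score += len(re.findall(r"[^aeiou]{2,}", word))    # contiguous consonant clusters
    return score

print(toy_complexity("dog"), toy_complexity("strips"), toy_complexity("elephants"))
```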
Neural Mechanisms Underlying Cross-Modal Phonetic Encoding.
Shahin, Antoine J; Backer, Kristina C; Rosenblum, Lawrence D; Kerlin, Jess R
2018-02-14
Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent or incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex. SIGNIFICANCE STATEMENT The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition). Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator).
Effects of Variance and Input Distribution on the Training of L2 Learners' Tone Categorization
ERIC Educational Resources Information Center
Liu, Jiang
2013-01-01
Recent psycholinguistic findings showed that (a) a multi-modal phonetic training paradigm that encodes visual, interactive information is more effective in training L2 learners' perception of novel categories, (b) decreasing the acoustic variance of a phonetic dimension allows the learners to more effectively shift the perceptual weight towards…
He nui na ala e hiki aku ai: Factors Influencing Phonetic Variation in the Hawaiian Word "keia"
ERIC Educational Resources Information Center
Drager, Katie; Comstock, Bethany Kaleialohapau'ole Chun; Kneubuhl, Hina Puamohala
2017-01-01
Apart from a handful of studies (e.g., Kinney 1956), linguists know little about what variation exists in Hawaiian and what factors constrain the variation. In this paper, we present an analysis of phonetic variation in the word "keia," meaning "this," examining the social, linguistic, and probabilistic factors that constrain…
The Effect of Pitch Peak Alignment on Sentence Type Identification in Russian
ERIC Educational Resources Information Center
Makarova, Veronika
2007-01-01
This paper reports the results of an experimental phonetic study examining pitch peak alignment in production and perception of three-syllable one-word sentences with phonetic rising-falling pitch movement by speakers of Russian. The first part of the study (Experiment 1) utilizes 22 one-word three-syllable utterances read by five female speakers…
Sub-Lexical Phonological and Semantic Processing of Semantic Radicals: A Primed Naming Study
ERIC Educational Resources Information Center
Zhou, Lin; Peng, Gang; Zheng, Hong-Ying; Su, I-Fan; Wang, William S.-Y.
2013-01-01
Most sinograms (i.e., Chinese characters) are phonograms (phonetic compounds). A phonogram is composed of a semantic radical and a phonetic radical, with the former usually implying the meaning of the phonogram, and the latter providing cues to its pronunciation. This study focused on the sub-lexical processing of semantic radicals which are…
Training and Research in Phonetics for Spanish as a Second Language with Technological Support
ERIC Educational Resources Information Center
Blanco Canales, Ana
2013-01-01
Foreign language acquisition must inevitably start with phonetics, an aspect of language whose importance is matched only by its neglect. Different research has shown how the systematic teaching of pronunciation is beneficial not only because it aids the comprehension of messages and their expression, but also because it diminishes the anxiety…
Etymological and Phonetic Changes among Foreign Words in Kiswahili.
ERIC Educational Resources Information Center
Patel, R.B.
1967-01-01
The author considers the etymological sources and phonetic changes which have occurred in such words as "bangi," "butu," "kalua," "mrututu," and "sambarau." The source of these words, which have found a place in Swahili, has been doubted or could not be established by compilers of different Swahili dictionaries. The author feels that the study and…
ERIC Educational Resources Information Center
Hsu, Chun-Hsien; Tsai, Jie-Li; Lee, Chia-Ying; Tzeng, Ovid J. -L.
2009-01-01
In this study, event-related potentials (ERPs) were used to trace the temporal dynamics of phonological consistency and phonetic combinability in the reading of Chinese phonograms. The data showed a significant consistency-by-combinability interaction at N170. High phonetic combinability characters elicited greater negativity at N170 than did low…
Two French-Speaking Cases of Foreign Accent Syndrome: An Acoustic-Phonetic Analysis
ERIC Educational Resources Information Center
Roy, Johanna-Pascale; Macoir, Joel; Martel-Sauvageau, Vincent; Boudreault, Carol-Ann
2012-01-01
Foreign accent syndrome (FAS) is an acquired neurologic disorder in which an individual suddenly and unintentionally speaks with an accent which is perceived as being different from his/her usual accent. This study presents an acoustic-phonetic description of two Quebec French-speaking cases. The first speaker presents a perceived accent shift to…
Perceptual Training of Second-Language Vowels: Does Musical Ability Play a Role?
ERIC Educational Resources Information Center
Ghaffarvand Mokari, Payam; Werner, Stefan
2018-01-01
The present study attempts to extend the research on the effects of phonetic training on the production and perception of second-language (L2) vowels. We also examined whether success in learning L2 vowels through high-variability intensive phonetic training is related to the learners' general musical abilities. Forty Azerbaijani learners of…
ERIC Educational Resources Information Center
Sarmiento, Jose; And Others
1974-01-01
Describes the use of the verbo-tonal method of phonetic correction and the Suvaglingua synthesizer in Spanish courses at the International School of Interpreters at Mons, Belgium. (Text is in French.) (PMP)
Phonetic Aspects of Children's Elicited Word Revisions.
ERIC Educational Resources Information Center
Paul-Brown, Diane; Yeni-Komshian, Grace H.
A study of the phonetic changes occurring when a speaker attempts to revise an unclear word for a listener focuses on changes made in the sound segment duration to maximize differences between phonemes. In the study, five-year-olds were asked by adults to revise words differing in voicing of initial and final stop consonants; a control group of…
ERIC Educational Resources Information Center
Cebrian, Juli; Carlet, Angelica
2014-01-01
This study examined the effect of short-term high-variability phonetic training on the perception of English /b/, /v/, /d/, /ð/, /æ/, /ʌ/, /i/, and /ɪ/ by Catalan/Spanish bilinguals learning English as a foreign language. Sixteen English-major undergraduates were tested before and after undergoing a four-session perceptual training program…
Vowel Harmony Is a Basic Phonetic Rule of the Turkic Languages
ERIC Educational Resources Information Center
Shoibekova, Gaziza B.; Odanova, Sagira A.; Sultanova, Bibigul M.; Yermekova, Tynyshtyk N.
2016-01-01
The present study comprehensively analyzes vowel harmony as an important phonetic rule in Turkic languages. Recent changes in the vowel harmony potential of Turkic sounds caused by linguistic and extra-linguistic factors were described. Vowels in the Kazakh, Turkish, and Uzbek language were compared. The way this or that phoneme sounded in the…
ERIC Educational Resources Information Center
Albareda-Castellot, Barbara; Pons, Ferran; Sebastian-Galles, Nuria
2011-01-01
Contrasting results have been reported regarding the phonetic acquisition of bilinguals. A lack of discrimination has been observed for certain native contrasts in 8-month-old Catalan-Spanish bilingual infants (Bosch & Sebastian-Galles, 2003a), though not in French-English bilingual infants (Burns, Yoshida, Hill & Werker, 2007; Sundara, Polka &…
Visual Speech Contributes to Phonetic Learning in 6-Month-Old Infants
ERIC Educational Resources Information Center
Teinonen, Tuomas; Aslin, Richard N.; Alku, Paavo; Csibra, Gergely
2008-01-01
Previous research has shown that infants match vowel sounds to facial displays of vowel articulation [Kuhl, P. K., & Meltzoff, A. N. (1982). The bimodal perception of speech in infancy. "Science, 218", 1138-1141; Patterson, M. L., & Werker, J. F. (1999). Matching phonetic information in lips and voice is robust in 4.5-month-old infants. "Infant…
Literacy: Parent Training in the Elementary Educational System
ERIC Educational Resources Information Center
Mathis Hill, Mattie Darlene
2012-01-01
Over several years, second grade parents have expressed concerns about not understanding the curriculum in the area of phonetic coding. The purpose of this study was to give second grade parents the skills they lacked in understanding phonetic coding so they could better help their children with homework and thus see if a significant difference in…
High Variability Phonetic Training as a Bridge from Research to Practice
ERIC Educational Resources Information Center
Barriuso, Taylor Anne; Hayes-Harb, Rachel
2018-01-01
This review of high variability phonetic training (HVPT) research begins by situating HVPT in its historical context and as a methodology for studying second language (L2) pronunciation. Next we identify and discuss issues in HVPT that are of particular relevance to real-world L2 learning and teaching settings, including the generalizability of…
ERIC Educational Resources Information Center
Saito, Kazuya
2013-01-01
The present study examines whether and to what degree providing explicit phonetic information (EI) at the beginning of form-focused instruction (FFI) on second language pronunciation can enhance the generalizability and magnitude of FFI effectiveness by increasing learners' ability to notice a new phone. Participants were 49 Japanese learners of…
ERIC Educational Resources Information Center
Munson, Benjamin; Edwards, Jan; Schellinger, Sarah K.; Beckman, Mary E.; Meyer, Marie K.
2010-01-01
This article honours Adele Miccio's life work by reflecting on the utility of phonetic transcription. The first section reviews the literature on cases where children whose speech appears to neutralize a contrast in the adult language are found on closer examination to produce a contrast ("covert contrast"). This study presents evidence…
The Influence of Phonetic Complexity on Stuttered Speech
ERIC Educational Resources Information Center
Coalson, Geoffrey A.; Byrd, Courtney T.; Davis, Barbara L.
2012-01-01
The primary purpose of this study was to re-examine the influence of phonetic complexity on stuttering in young children through the use of the Word Complexity Measure (WCM). Parent-child conversations were transcribed for 14 children who stutter (mean age = 3 years, 7 months; SD = 11.20 months). Lexical and linguistic factors were accounted for…
The phonetics of talk in interaction--introduction to the special issue.
Ogden, Richard
2012-03-01
This overview paper provides an introduction to work on naturally-occurring speech data, combining techniques of conversation analysis with techniques and methods from phonetics. The paper describes the development of the field, highlighting current challenges and progress in interdisciplinary work. It considers the role of quantification and its relationship to a qualitative methodology. It presents the conversation analytic notion of sequence as a version of context, and argues that sequences of talk constrain relevant phonetic design, and so provide one account for variability in naturally occurring speech. The paper also describes the manipulation of speech and language on many levels simultaneously. All of these themes occur and are explored in more detail in the papers contained in this special issue.
Shim, Hyungsub; Hurley, Robert S; Rogalski, Emily; Mesulam, M-Marsel
2012-07-01
This study evaluates spelling errors in the three subtypes of primary progressive aphasia (PPA): agrammatic (PPA-G), logopenic (PPA-L), and semantic (PPA-S). Forty-one PPA patients and 36 age-matched healthy controls were administered a test of spelling. The total number of errors and types of errors in spelling to dictation of regular words, exception words and nonwords were recorded. Error types were classified based on phonetic plausibility. In the first analysis, scores were evaluated by clinical diagnosis. Errors in spelling exception words and phonetically plausible errors were seen in PPA-S. Conversely, PPA-G was associated with errors in nonword spelling and phonetically implausible errors. In the next analysis, spelling scores were correlated to other neuropsychological language test scores. Significant correlations were found between exception word spelling and measures of naming and single word comprehension. Nonword spelling correlated with tests of grammar and repetition. Global language measures did not correlate significantly with spelling scores, however. Cortical thickness analysis based on MRI showed that atrophy in several language regions of interest was correlated with spelling errors. Atrophy in the left supramarginal gyrus and inferior frontal gyrus (IFG) pars orbitalis correlated with errors in nonword spelling, while thinning in the left temporal pole and fusiform gyrus correlated with errors in exception word spelling. Additionally, phonetically implausible errors in regular word spelling correlated with thinning in the left IFG pars triangularis and pars opercularis. Together, these findings suggest two independent systems for spelling to dictation, one phonetic (phoneme to grapheme conversion), and one lexical (whole word retrieval).
Phonetically Governed Voicing Onset and Offset in Preschool Children Who Stutter
ERIC Educational Resources Information Center
Arenas, Richard M.; Zebrowski, Patricia M.; Moon, Jerald B.
2012-01-01
Phonetically governed changes in the fundamental frequency (F[subscript 0]) of vowels that immediately precede and follow voiceless stop plosives have been found to follow consistent patterns in adults and children as young as four years of age. In the present study, F[subscript 0] onset and offset patterns in 14 children who stutter (CWS) and 14…
Phonetic Parameters and Perceptual Judgments of Accent in English by American and Japanese Listeners
ERIC Educational Resources Information Center
Riney, Timothy J.; Takagi, Naoyuki; Inutsuka, Kumiko
2005-01-01
In this study we identify some of the phonetic parameters that correlate with nonnative speakers' (NNSs) perceptual judgments of accent in English and investigate NNS listener perceptions of English from a World Englishes point of view. Our main experiment involved 3,200 assessments of the perceived degree of accent in English of two speaker…
ERIC Educational Resources Information Center
Diaz, Begona; Mitterer, Holger; Broersma, Mirjam; Sebastian-Galles, Nuria
2012-01-01
The extent to which the phonetic system of a second language is mastered varies across individuals. The present study evaluates the pattern of individual differences in late bilinguals across different phonological processes. Fifty-five late Dutch-English bilinguals were tested on their ability to perceive a difficult L2 speech contrast (the…
ERIC Educational Resources Information Center
Leutenegger, Ralph R.
The phonetic transcription ability of 78 college students whose transcription instruction was administered by means of pre-programed Language Master cards was compared with that of 81 students whose instruction was non-automated. Ability was measured by seven weekly tests. There was no significant relationship on any of 29 variables with type of…
ERIC Educational Resources Information Center
Knight, Rachael-Anne
2010-01-01
Currently little is known about how students use podcasts of exercise material (as opposed to lecture material), and whether they perceive such podcasts to be beneficial. This study aimed to assess how exercise podcasts for phonetics are used and perceived by second year speech and language therapy students. Eleven podcasts of graded phonetics…
ERIC Educational Resources Information Center
Evmenova, Anna S.; Graff, Heidi J.; Jerome, Marci Kinas; Behrmann, Michael M.
2010-01-01
This investigation examined the effects of currently available word prediction software programs that support phonetic/inventive spelling on the quality of journal writing by six students with severe writing and/or spelling difficulties in grades three through six during a month-long summer writing program. A changing conditions single-subject…
Signs and Transitions: Do They Differ Phonetically and Does It Matter?
ERIC Educational Resources Information Center
Jantunen, Tommi
2013-01-01
The point of departure of this article is the cluster of three pre-theoretical presuppositions (P) governing modern research on sign languages: (1) that a stream of signing consists of signs (S) and transitions (T), (2) that only Ss are linguistically relevant units, and (3) that there is a qualitative (e.g., phonetic) difference between Ss and…
ERIC Educational Resources Information Center
Heisler, Lori; Goffman, Lisa
2016-01-01
A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…
Talking with John Trim (Part I): A Career in Phonetics, Applied Linguistics and the Public Service
ERIC Educational Resources Information Center
Little, David; King, Lid
2013-01-01
As this issue was in preparation, the journal learned with great regret of the passing of John Trim. John was a long-serving member of the "Language Teaching" Board and his insight and advice proved invaluable for this and previous editors. An expert in the field of phonetics, linguistics, language didactics and policy, John worked…
ERIC Educational Resources Information Center
Mulak, Karen E.; Best, Catherine T.; Tyler, Michael D.; Kitamura, Christine; Irwin, Julia R.
2013-01-01
By 12 months, children grasp that a phonetic change to a word can change its identity ("phonological distinctiveness"). However, they must also grasp that some phonetic changes do "not" ("phonological constancy"). To test development of phonological constancy, sixteen 15-month-olds and sixteen 19-month-olds completed…
Multilingual Phoneme Models for Rapid Speech Processing System Development
2006-09-01
Processes are used to develop an Arabic speech recognition system starting from monolingual English models, International Phonetic Association (IPA) clusters… It was found that multilingual bootstrapping methods outperform monolingual English bootstrapping methods on the Arabic evaluation data initially…
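A minimal sketch of the IPA-based bootstrapping idea, with hypothetical model files and a toy phone mapping (both are assumptions, not the report's tables): acoustic models for a new language's phones are seeded by copying the closest-matching source-language models, with a back-off when no mapping exists.

```python
# Seed acoustic models for target-language phones from the closest English models via IPA.
english_models = {"b": "en_b.mdl", "t": "en_t.mdl", "k": "en_k.mdl", "s": "en_s.mdl", "a": "en_a.mdl"}
ipa_map = {"b": "b", "t": "t", "s": "s", "q": "k", "ħ": "h", "a": "a"}   # target phone -> nearest English phone

def bootstrap(target_phones):
    seeds = {}
    for p in target_phones:
        src = ipa_map.get(p)
        seeds[p] = english_models.get(src, "silence.mdl")   # back off when no usable mapping exists
    return seeds

print(bootstrap(["b", "q", "ħ"]))   # {'b': 'en_b.mdl', 'q': 'en_k.mdl', 'ħ': 'silence.mdl'}
```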
ERIC Educational Resources Information Center
Tananuraksakul, Noparat
2017-01-01
This paper examined the hypothesis that Thai EFL students' positive attitudes toward their non-native English accented speech could be promoted by the use of University of Iowa's "Sounds of American English" website, known as the "number 1 phonetics website". Fifty-two undergraduate students in the field of sciences…
Independence of Early Speech Processing from Word Meaning
Travis, Katherine E.; Leonard, Matthew K.; Chan, Alexander M.; Torres, Christina; Sizemore, Marisa L.; Qu, Zhe; Eskandar, Emad; Dale, Anders M.; Elman, Jeffrey L.; Cash, Sydney S.; Halgren, Eric
2013-01-01
We combined magnetoencephalography (MEG) with magnetic resonance imaging and electrocorticography to separate in anatomy and latency 2 fundamental stages underlying speech comprehension. The first acoustic-phonetic stage is selective for words relative to control stimuli individually matched on acoustic properties. It begins ∼60 ms after stimulus onset and is localized to middle superior temporal cortex. It was replicated in another experiment, but is strongly dissociated from the response to tones in the same subjects. Within the same task, semantic priming of the same words by a related picture modulates cortical processing in a broader network, but this does not begin until ∼217 ms. The earlier onset of acoustic-phonetic processing compared with lexico-semantic modulation was significant in each individual subject. The MEG source estimates were confirmed with intracranial local field potential and high gamma power responses acquired in 2 additional subjects performing the same task. These recordings further identified sites within superior temporal cortex that responded only to the acoustic-phonetic contrast at short latencies, or the lexico-semantic at long. The independence of the early acoustic-phonetic response from semantic context suggests a limited role for lexical feedback in early speech perception. PMID:22875868
From Phonemes to Articulatory Codes: An fMRI Study of the Role of Broca's Area in Speech Production
de Zwart, Jacco A.; Jansma, J. Martijn; Pickering, Martin J.; Bednar, James A.; Horwitz, Barry
2009-01-01
We used event-related functional magnetic resonance imaging to investigate the neuroanatomical substrates of phonetic encoding and the generation of articulatory codes from phonological representations. Our focus was on the role of the left inferior frontal gyrus (LIFG) and in particular whether the LIFG plays a role in sublexical phonological processing such as syllabification or whether it is directly involved in phonetic encoding and the generation of articulatory codes. To answer this question, we contrasted the brain activation patterns elicited by pseudowords with high– or low–sublexical frequency components, which we expected would reveal areas related to the generation of articulatory codes but not areas related to phonological encoding. We found significant activation of a premotor network consisting of the dorsal precentral gyrus, the inferior frontal gyrus bilaterally, and the supplementary motor area for low– versus high–sublexical frequency pseudowords. Based on our hypothesis, we concluded that these areas and in particular the LIFG are involved in phonetic and not phonological encoding. We further discuss our findings with respect to the mechanisms of phonetic encoding and provide evidence in support of a functional segregation of the posterior part of Broca's area, the pars opercularis. PMID:19181696
Learning to talk: A non-imitative account of the replication of phonetics by child learners
NASA Astrophysics Data System (ADS)
Messum, Piers
2005-04-01
How is it that an English-speaking 5-year-old comes to: pronounce the vowel of seat to be longer than that of sit, but shorter than that of seed; say a multi-word phrase with stress-timed rhythm; aspirate the /p/s of pin, polite, and spin to different degrees? These are systematic features of English, and most people believe that a child replicates them by imitation. If so, he is paying attention to phonetic detail in adult speech that is not very significant linguistically, and then making the effort to reproduce it. With all the other communicative challenges he faces, how plausible is this? An alternative, non-imitative account of the replication of these features relies on two mechanisms: (1) emulation, and (2) the conditioning of articulatory activity by the developing characteristics of speech breathing. The phenomena above then become no more than expressions of how a child finds ways to warp his phonetic output in order to reconcile conflicting production demands. The criteria he uses to do this make the challenges both of learning to talk and then of managing the interaction of complex phonetic patterns considerably more straightforward than has been imagined.
Deng, Yuan; Chou, Tai-li; Ding, Guo-sheng; Peng, Dan-ling; Booth, James R.
2016-01-01
Neural changes related to the learning of the pronunciation of Chinese characters in English speakers were examined using fMRI. We examined the item-specific learning effects for trained characters and the generalization of phonetic knowledge to novel transfer characters that shared a phonetic radical (part of a character that gives a clue to the whole character’s pronunciation) with trained characters. Behavioral results showed that shared phonetic information improved performance for transfer characters. Neuroimaging results for trained characters over learning found increased activation in the right lingual gyrus, and greater activation enhancement in the left inferior frontal gyrus (Brodmann’s area 44) was correlated with higher accuracy improvement. Moreover, greater activation for transfer characters in these two regions at the late stage of training was correlated with better knowledge of the phonetic radical in a delayed recall test. The current study suggests that the right lingual gyrus and the left inferior frontal gyrus are crucial for the learning of Chinese characters and the generalization of that knowledge to novel characters. Left inferior frontal gyrus is likely involved in phonological segmentation, whereas right lingual gyrus may subserve processing visual–orthographic information. PMID:20807053
Mapping Phonetic Features for Voice-Driven Sound Synthesis
NASA Astrophysics Data System (ADS)
Janer, Jordi; Maestre, Esteban
In applications where the human voice controls the synthesis of musical instruments sounds, phonetics convey musical information that might be related to the sound of the imitated musical instrument. Our initial hypothesis is that phonetics are user- and instrument-dependent, but they remain constant for a single subject and instrument. We propose a user-adapted system, where mappings from voice features to synthesis parameters depend on how subjects sing musical articulations, i.e. note to note transitions. The system consists of two components. First, a voice signal segmentation module that automatically determines note-to-note transitions. Second, a classifier that determines the type of musical articulation for each transition based on a set of phonetic features. For validating our hypothesis, we run an experiment where subjects imitated real instrument recordings with their voice. Performance recordings consisted of short phrases of saxophone and violin performed in three grades of musical articulation labeled as: staccato, normal, legato. The results of a supervised training classifier (user-dependent) are compared to a classifier based on heuristic rules (user-independent). Finally, from the previous results we show how to control the articulation in a sample-concatenation synthesizer by selecting the most appropriate samples.
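The rule-based (user-independent) baseline mentioned above might look something like the sketch below; the feature names and thresholds are assumptions for illustration only.

```python
# Heuristic classification of a sung note-to-note transition from two phonetic features.
def classify_transition(silence_ms, voiced_fraction):
    if silence_ms > 80 and voiced_fraction < 0.3:
        return "staccato"   # clear break and little voicing between notes
    if silence_ms < 20 and voiced_fraction > 0.8:
        return "legato"     # near-continuous voicing across the transition
    return "normal"

print(classify_transition(120, 0.1), classify_transition(5, 0.95))   # staccato legato
```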
Signature of prosody in tonal realization: Evidence from Standard Chinese
NASA Astrophysics Data System (ADS)
Chen, Yiya
2004-05-01
It is by now widely accepted that the articulation of speech is influenced by the prosodic structure into which the utterance is organized. Furthermore, the effect of prosody on F0 realization has been shown to be mainly phonological [Beckman and Pierrehumbert (1986); Selkirk and Shen (1990)]. This paper presents data from the F0 realizations of lexical tones in Standard Chinese and shows that prosodic factors may influence the articulation of a lexical tone and induce phonetic variations in its surface F0 contours, similar to the phonetic effect of prosody on segment articulation [de Jong (1995); Keating and Fougeron (1997)]. Data were elicited from four native speakers of Standard Chinese producing all four lexical tones in different tonal contexts and under various focus conditions (i.e., under focus, no focus, and post focus), with three renditions for each condition. The observed F0 variations are argued to be best analyzed as resulting from prosodically driven differences in the phonetic implementation of the lexical tonal targets, which in turn are induced by pragmatically driven differences in how distinctive an underlying tonal target should be realized. Implications of this study for the phonetic implementation of phonological tonal targets will also be discussed.
Exceptionality in vowel harmony
NASA Astrophysics Data System (ADS)
Szeredi, Daniel
Vowel harmony has been of great interest in phonological research. It has been widely accepted that vowel harmony is a phonetically natural phenomenon, which means that it is a common pattern because it provides advantages to the speaker in articulation and to the listener in perception. Exceptional patterns have proved to be a challenge to the phonetically grounded analysis as they, by their nature, introduce phonetically disadvantageous sequences to the surface form that consist of harmonically different vowels. Such forms are found, for example, in the Finnish stem tuoli 'chair' or in the Hungarian suffixed form hi:d-hoz 'to the bridge', both word forms containing a mix of front and back vowels. Evidence has recently been presented that there might be a phonetic-level explanation for some exceptional patterns, namely the possibility that some vowels participating in irregular stems (like the vowel [i] in the Hungarian stem hi:d 'bridge' above) differ in some small phonetic detail from vowels in regular stems. The main question has not been raised, though: does this phonetic detail matter for speakers? Would they use these minor differences when they have to categorize a new word as regular or irregular? A different recent trend explains morphophonological exceptionality by looking at the phonotactic regularities characteristic of classes of stems based on their morphological behavior. Studies have shown that speakers are aware of these regularities and use them as cues when they have to decide what class a novel stem belongs to. These sublexical phonotactic regularities have already been shown to be present in some exceptional patterns of vowel harmony, but many questions remain open: how is learning the static generalization linked to learning the allomorph selection facet of vowel harmony? How much does the effect of consonants on vowel harmony matter, compared with the effect of vowel-to-vowel correspondences? This dissertation aims to test these two ideas -- that speakers use phonetic cues and/or that they use sublexical phonotactic regularities in categorizing stems as regular or irregular -- and to answer the more detailed questions, such as the effect of consonantal patterns on exceptional patterns and the link between allomorph selection and static phonotactic generalizations. The phonetic hypothesis is tested on the Hungarian antiharmonicity pattern (stems with front vowels consistently selecting back suffixes, as in the example hi:d-hoz 'to the bridge' above), and the results indicate that while there may be some small phonetic differences between vowels in regular and irregular stems, speakers do not use these, or even enhanced, differences when they have to categorize stems. The sublexical hypothesis is tested and confirmed by looking at the disharmonicity pattern in Finnish. In Finnish, stems that contain both back and certain front vowels are frequent and perfectly grammatical, as in the example tuoli 'chair' above, while the mixing of back and some other front vowels is very rare and mostly confined to loanwords like amatoori 'amateur'. It will be seen that speakers do use sublexical phonotactic regularities to decide on the acceptability of novel stems, but certain patterns that are phonetically or phonologically more natural (vowel-to-vowel correspondences) seem to matter much more than other effects (like consonantal effects).
Finally, a computational account will be given of how exceptionality might be learned by speakers, using maximum entropy grammars available in the literature to simulate the acquisition of the Finnish disharmonicity pattern. It will be shown that in order to accurately model the overall behavior on the exact pattern, the learner has to have access not only to the lexicon, but also to the allomorph selection patterns in the language.
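A minimal sketch of how a maximum entropy grammar of the kind mentioned above assigns probabilities to competing suffixed candidates, assuming illustrative constraint names, violation counts, and weights (none of them taken from the dissertation):

```python
import math

# Hypothetical violation profiles for two suffix allomorphs of a
# Finnish-like novel stem; constraints and weights are illustrative only.
candidates = {
    "stem-back":  {"*FrontBackMix": 1, "Agree(V-V)": 0},
    "stem-front": {"*FrontBackMix": 0, "Agree(V-V)": 1},
}
weights = {"*FrontBackMix": 2.0, "Agree(V-V)": 0.5}

def maxent_probs(cands, w):
    # Harmony is the negative weighted sum of violations; P(candidate)
    # is proportional to exp(harmony), normalized over all candidates.
    scores = {c: math.exp(-sum(w[k] * v for k, v in viols.items()))
              for c, viols in cands.items()}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

print(maxent_probs(candidates, weights))
```

Learning in such a model amounts to adjusting the constraint weights so that the predicted probabilities match the frequencies of suffix choices observed in the input.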
Parker, Mark; Cunningham, Stuart; Enderby, Pam; Hawley, Mark; Green, Phil
2006-01-01
The STARDUST project developed robust computer speech recognizers for use by eight people with severe dysarthria and concomitant physical disability to access assistive technologies. Independent computer speech recognizers trained with normal speech are of limited functional use by those with severe dysarthria due to limited and inconsistent proximity to "normal" articulatory patterns. Severe dysarthric output may also be characterized by a small mass of distinguishable phonetic tokens making the acoustic differentiation of target words difficult. Speaker dependent computer speech recognition using Hidden Markov Models was achieved by the identification of robust phonetic elements within the individual speaker output patterns. A new system of speech training using computer generated visual and auditory feedback reduced the inconsistent production of key phonetic tokens over time.
Sociophonetics: The Role of Words, the Role of Context, and the Role of Words in Context.
Hay, Jennifer
2018-03-02
This paper synthesizes a wide range of literature from sociolinguistics and cognitive psychology, to argue for a central role for the "word" as a vehicle of language variation and change. Three crucially interlinked strands of research are reviewed: the role of context in associative learning, the word-level storage of phonetic and contextual detail, and the phonetic consequences of skewed distributions of words across different contexts. I argue that the human capacity for associative learning, combined with attention to fine-phonetic detail at the level of the word, plays a significant role in predicting a range of subtle but systematically robust observed socioindexical patterns in speech production and perception. Copyright © 2018 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Munson, Benjamin; Johnson, Julie M.; Edwards, Jan
2012-01-01
Purpose: This study examined whether experienced speech-language pathologists (SLPs) differ from inexperienced people in their perception of phonetic detail in children's speech. Method: Twenty-one experienced SLPs and 21 inexperienced listeners participated in a series of tasks in which they used a visual-analog scale (VAS) to rate children's…
ERIC Educational Resources Information Center
Han, Jeong-Im; Hwang, Jong-Bai; Choi, Tae-Hwan
2011-01-01
The purpose of this study was to evaluate the acquisition of non-contrastive phonetic details of a second language. Reduced vowels in English are realized as a schwa or barred- i depending on their phonological contexts, but Korean has no reduced vowels. Two groups of Korean learners of English who differed according to the experience of residence…
On Being Echolalic: An Analysis of the Interactional and Phonetic Aspects of an Autistic's Language.
ERIC Educational Resources Information Center
Local, John; Wootton, Tony
1996-01-01
A case study analyzed the echolalia behavior of an autistic 11-year-old boy, based on recordings made in his home and school. Focus was on the subset of immediate echolalia referred to as pure echoing. Using an approach informed by conversation analysis and descriptive phonetics, distinctions are drawn between different forms of pure echo. It is…
ERIC Educational Resources Information Center
Hsiao, Janet Hui-wen
2011-01-01
In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority opposite arrangement also exists (PS characters). As the number of phonetic radical types is much greater than semantic radical types, in SP characters the information is…
ERIC Educational Resources Information Center
Solon, Megan
2017-01-01
This study explores the second language (L2) acquisition of a segment that exists in learners' first language (L1) and in their L2 but that differs in its phonetic realization and allophonic patterning in the two languages. Specifically, this research tracks development in one aspect of the production of the alveolar lateral /l/ in the L2 Spanish…
ERIC Educational Resources Information Center
Misiurski, Cara; Blumstein, Sheila E.; Rissman, Jesse; Berman, Daniel
2005-01-01
This study examined the effects that the acoustic-phonetic structure of a stimulus exerts on the processes by which lexical candidates compete for activation. An auditory lexical decision paradigm was used to investigate whether shortening the VOT of an initial voiceless stop consonant in a real word results in the activation of the…
ERIC Educational Resources Information Center
Francais dans le Monde, 1987
1987-01-01
Four suggestions for classroom language-learning activities include a series of dot-to-dot phonetics games, small group collage making and presentation, use of magazine covers to stimulate class discussion and introduction of vocabulary, and student-written sketches based on misunderstandings. (MSE)
ERIC Educational Resources Information Center
Haskins Labs., New Haven, CT.
This document, containing 15 articles and 2 abstracts, is a report on the current status and progress of speech research. The following topics are investigated: phonological fusion, phonetic prerequisites for first-language learning, auditory and phonetic levels of processing, auditory short-term memory in vowel perception, hemispheric…
Covington, Michael A; Lunden, S L Anya; Cristofaro, Sarah L; Wan, Claire Ramsay; Bailey, C Thomas; Broussard, Beth; Fogarty, Robert; Johnson, Stephanie; Zhang, Shayi; Compton, Michael T
2012-12-01
Aprosody, or flattened speech intonation, is a recognized negative symptom of schizophrenia, though it has rarely been studied from a linguistic/phonological perspective. To bring the latest advances in computational linguistics to the phenomenology of schizophrenia and related psychotic disorders, a clinical first-episode psychosis research team joined with a phonetics/computational linguistics team to conduct a preliminary, proof-of-concept study. Video recordings from a semi-structured clinical research interview were available from 47 first-episode psychosis patients. Audio tracks of the video recordings were extracted, and after review of quality, 25 recordings were available for phonetic analysis. These files were de-noised and a trained phonologist extracted a 1-minute sample of each patient's speech. WaveSurfer 1.8.5 was used to create, from each speech sample, a file of formant values (F0, F1, F2, where F0 is the fundamental frequency and F1 and F2 are resonance bands indicating the moment-by-moment shape of the oral cavity). Variability in these phonetic indices was correlated with severity of Positive and Negative Syndrome Scale negative symptom scores using Pearson correlations. A measure of variability of tongue front-to-back position-the standard deviation of F2-was statistically significantly correlated with the severity of negative symptoms (r=-0.446, p=0.03). This study demonstrates a statistically significant and meaningful correlation between negative symptom severity and phonetically measured reductions in tongue movements during speech in a sample of first-episode patients just initiating treatment. Further studies of negative symptoms, applying computational linguistics methods, are warranted. Copyright © 2012 Elsevier B.V. All rights reserved.
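A minimal sketch of the core computation reported above: take the standard deviation of F2 per speaker as an index of tongue front-to-back variability and correlate it with negative symptom scores. The arrays below are simulated stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

# f2_tracks: per-patient arrays of F2 values (Hz) sampled over the
# 1-minute speech sample; neg_scores: PANSS negative symptom totals.
# Both are fabricated here purely for illustration.
f2_tracks = [np.random.default_rng(i).normal(1500, 300 - 8 * i, 600)
             for i in range(25)]
neg_scores = np.linspace(10, 30, 25)

# Variability of tongue front-to-back position per speaker
f2_sd = np.array([trk.std(ddof=1) for trk in f2_tracks])

r, p = stats.pearsonr(f2_sd, neg_scores)
print(f"r = {r:.3f}, p = {p:.3f}")
```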
Petitto, L. A.; Berens, M. S.; Kovelman, I.; Dubins, M. H.; Jasinska, K.; Shalinsky, M.
2011-01-01
In a neuroimaging study focusing on young bilinguals, we explored the brains of bilingual and monolingual babies across two age groups (younger 4–6 months, older 10–12 months), using fNIRS in a new event-related design, as babies processed linguistic phonetic (Native English, Non-Native Hindi) and nonlinguistic Tone stimuli. We found that phonetic processing in bilingual and monolingual babies is accomplished with the same language-specific brain areas classically observed in adults, including the left superior temporal gyrus (associated with phonetic processing) and the left inferior frontal cortex (associated with the search and retrieval of information about meanings, and syntactic and phonological patterning), with intriguing developmental timing differences: left superior temporal gyrus activation was observed early and remained stably active over time, while left inferior frontal cortex showed greater increase in neural activation in older babies, notably at the precise age when babies enter the universal first-word milestone, thus revealing a first-time focal brain correlate that may mediate a universal behavioral milestone in early human language acquisition. A difference was observed in the older bilingual babies’ resilient neural and behavioral sensitivity to Non-Native phonetic contrasts at a time when monolingual babies can no longer make such discriminations. We advance the “Perceptual Wedge Hypothesis” as one possible explanation for how exposure to greater than one language may alter neural and language processing in ways that we suggest are advantageous to language users. The brains of bilinguals and multilinguals may provide the most powerful window into the full neural “extent and variability” that our human species’ language processing brain areas could potentially achieve. PMID:21724244
Covington, Michael A.; Lunden, S.L. Anya; Cristofaro, Sarah L.; Wan, Claire Ramsay; Bailey, C. Thomas; Broussard, Beth; Fogarty, Robert; Johnson, Stephanie; Zhang, Shayi; Compton, Michael T.
2012-01-01
Background Aprosody, or flattened speech intonation, is a recognized negative symptom of schizophrenia, though it has rarely been studied from a linguistic/phonological perspective. To bring the latest advances in computational linguistics to the phenomenology of schizophrenia and related psychotic disorders, a clinical first-episode psychosis research team joined with a phonetics/computational linguistics team to conduct a preliminary, proof-of-concept study. Methods Video recordings from a semi-structured clinical research interview were available from 47 first-episode psychosis patients. Audio tracks of the video recordings were extracted, and after review of quality, 25 recordings were available for phonetic analysis. These files were de-noised and a trained phonologist extracted a 1-minute sample of each patient’s speech. WaveSurfer 1.8.5 was used to create, from each speech sample, a file of formant values (F0, F1, F2, where F0 is the fundamental frequency and F1 and F2 are resonance bands indicating the moment-by-moment shape of the oral cavity). Variability in these phonetic indices was correlated with severity of Positive and Negative Syndrome Scale negative symptom scores using Pearson correlations. Results A measure of variability of tongue front-to-back position—the standard deviation of F2—was statistically significantly correlated with the severity of negative symptoms (r=−0.446, p=0.03). Conclusion This study demonstrates a statistically significant and meaningful correlation between negative symptom severity and phonetically measured reductions in tongue movements during speech in a sample of first-episode patients just initiating treatment. Further studies of negative symptoms, applying computational linguistics methods, are warranted. PMID:23102940
ERIC Educational Resources Information Center
Williams, Frederick, Ed.; And Others
In this second of two studies conducted with portions of the National Speech and Hearing Survey data, the investigators analyzed the phonetic variants from standard American English in the speech of two groups of nonstandard English speaking children. The study used samples of free speech and performance on the Goldman-Fristoe Test of Articulation…
ERIC Educational Resources Information Center
Olsen, Daniel J.
2014-01-01
While speech analysis technology has become an integral part of phonetic research, and to some degree is used in language instruction at the most advanced levels, it appears to be mostly absent from the beginning levels of language instruction. In part, the lack of incorporation into the language classroom can be attributed to both the lack of…
Auditory Modeling for Noisy Speech Recognition.
2000-01-01
multiple platforms including PCs, workstations, and DSPs. A prototype version of the SOS process was tested on the Japanese Hiragana language with good...judgment among linguists. American English has 48 phonetic sounds in the ARPABET representation. Hiragana, the Japanese phonetic language, has only 20... Japanese Hiragana," H.L. Pfister, FL 95, 1995. "State Recognition for Noisy Dynamic Systems," H.L. Pfister, Tech 2005, Chicago, 1995. "Experiences
Kyushu Neuro Psychiatry (Selected Articles),
1983-04-22
to left and vertical lines from bottom to top. (Note: it is possible to write Japanese either vertically or horizontally.) But it was possible to read...five words in phonetic characters (hiragana) and in Chinese characters (kanji). 3) Copying short and long sentences: short sentences of fourteen...was conducted on Day 3 and Day 6. The subjects were asked to write the five hiragana (phonetic, cursive characters) and kanji (Chinese characters
ERIC Educational Resources Information Center
Petitto, L. A.; Berens, M. S.; Kovelman, I.; Dubins, M. H.; Jasinska, K.; Shalinsky, M.
2012-01-01
In a neuroimaging study focusing on young bilinguals, we explored the brains of bilingual and monolingual babies across two age groups (younger 4-6 months, older 10-12 months), using fNIRS in a new event-related design, as babies processed linguistic phonetic (Native English, Non-Native Hindi) and nonlinguistic Tone stimuli. We found that phonetic…
Van Lierde, K; Galiwango, G; Hodges, A; Bettens, K; Luyten, A; Vermeersch, H
2012-01-01
The purpose of this study was to determine the impact of partial glossectomy (using the keyhole technique) on speech intelligibility, articulation, resonance and oromyofunctional behavior. A partial glossectomy was performed in 4 children with Beckwith-Wiedemann syndrome between the ages of 0.5 and 3.1 years. An ENT assessment, a phonetic inventory, a phonemic and phonological analysis and a consensus perceptual evaluation of speech intelligibility, resonance and oromyofunctional behavior were performed. It was not possible in this study to separate the effects of the surgery from the typical developmental progress of speech sound mastery. Improved speech intelligibility, a more complete phonetic inventory, an increase in phonological skills, normal resonance and increased motor-oriented oral behavior were found in the postsurgical condition. Phonetic distortions, lip incompetence and an interdental tongue position were still present in the postsurgical condition. Speech therapy should be focused on correct phonetic placement and a motor-oriented approach to increase lip competence, and on functional tongue exercises and tongue lifting during the production of alveolars. Detailed analyses in a larger number of subjects with and without Beckwith-Wiedemann syndrome may help further illustrate the long-term impact of partial glossectomy. Copyright © 2011 S. Karger AG, Basel.
Phonetically Irregular Word Pronunciation and Cortical Thickness in the Adult Brain
Blackmon, Karen; Barr, William B.; Kuzniecky, Ruben; DuBois, Jonathan; Carlson, Chad; Quinn, Brian T.; Blumberg, Mark; Halgren, Eric; Hagler, Donald J.; Mikhly, Mark; Devinsky, Orrin; McDonald, Carrie R.; Dale, Anders M.; Thesen, Thomas
2010-01-01
Accurate pronunciation of phonetically irregular words (exception words) requires prior exposure to unique relationships between orthographic and phonemic features. Whether such word knowledge is accompanied by structural variation in areas associated with orthographic-to-phonemic transformations has not been investigated. We used high resolution MRI to determine whether performance on a visual word-reading test composed of phonetically irregular words, the Wechsler Test of Adult Reading (WTAR), is associated with regional variations in cortical structure. A sample of 60 right-handed, neurologically intact individuals were administered the WTAR and underwent 3T volumetric MRI. Using quantitative, surface-based image analysis, cortical thickness was estimated at each vertex on the cortical mantle and correlated with WTAR scores while controlling for age. Higher scores on the WTAR were associated with thicker cortex in bilateral anterior superior temporal gyrus, bilateral angular gyrus/posterior superior temporal gyrus, and left hemisphere intraparietal sulcus. Higher scores were also associated with thinner cortex in left hemisphere posterior fusiform gyrus and central sulcus, bilateral inferior frontal gyrus, and right hemisphere lingual gyrus and supramarginal gyrus. These results suggest that the ability to correctly pronounce phonetically irregular words is associated with structural variations in cortical areas that are commonly activated in functional neuroimaging studies of word reading, including areas associated with grapheme-to-phonemic conversion. PMID:20302944
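The thickness-score association controlling for age could be approximated at a single vertex by a partial correlation, as in the sketch below. The data are simulated, and the study itself used whole-cortex surface-based statistics rather than this simplified per-vertex version.

```python
import numpy as np
from scipy import stats

# Simulated per-subject values; in the study, thickness was estimated at
# every vertex on the cortical mantle, not at one location.
rng = np.random.default_rng(42)
n = 60
age = rng.uniform(18, 65, n)
wtar = 120 - 0.1 * age + rng.normal(0, 5, n)
thickness = 2.5 + 0.01 * (wtar - wtar.mean()) - 0.003 * age + rng.normal(0, 0.05, n)

def residualize(y, x):
    # Remove the linear effect of x from y
    slope, intercept, *_ = stats.linregress(x, y)
    return y - (intercept + slope * x)

# Partial correlation of thickness and WTAR, controlling for age
r, p = stats.pearsonr(residualize(thickness, age), residualize(wtar, age))
print(f"partial r (thickness ~ WTAR | age) = {r:.2f}, p = {p:.3f}")
```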
Minicucci, Domenic; Guediche, Sara; Blumstein, Sheila E
2013-08-01
The current study explored how factors of acoustic-phonetic and lexical competition affect access to the lexical-semantic network during spoken word recognition. An auditory semantic priming lexical decision task was presented to subjects while in the MR scanner. Prime-target pairs consisted of prime words with the initial voiceless stop consonants /p/, /t/, and /k/ followed by word and nonword targets. To examine the neural consequences of lexical and sound structure competition, primes either had voiced minimal pair competitors or they did not, and they were either acoustically modified to be poorer exemplars of the voiceless phonetic category or not. Neural activation associated with semantic priming (Unrelated-Related conditions) revealed a bilateral fronto-temporo-parietal network. Within this network, clusters in the left insula/inferior frontal gyrus (IFG), left superior temporal gyrus (STG), and left posterior middle temporal gyrus (pMTG) showed sensitivity to lexical competition. The pMTG also demonstrated sensitivity to acoustic modification, and the insula/IFG showed an interaction between lexical competition and acoustic modification. These findings suggest the posterior lexical-semantic network is modulated by both acoustic-phonetic and lexical structure, and that the resolution of these two sources of competition recruits frontal structures. Copyright © 2013 Elsevier Ltd. All rights reserved.
Goswami, Usha; Fosker, Tim; Huss, Martina; Mead, Natasha; Szucs, Dénes
2011-01-01
Across languages, children with developmental dyslexia have a specific difficulty with the neural representation of the sound structure (phonological structure) of speech. One likely cause of their difficulties with phonology is a perceptual difficulty in auditory temporal processing (Tallal, 1980). Tallal (1980) proposed that basic auditory processing of brief, rapidly successive acoustic changes is compromised in dyslexia, thereby affecting phonetic discrimination (e.g. discriminating /b/ from /d/) via impaired discrimination of formant transitions (rapid acoustic changes in frequency and intensity). However, an alternative auditory temporal hypothesis is that the basic auditory processing of the slower amplitude modulation cues in speech is compromised (Goswami et al., 2002). Here, we contrast children's perception of a synthetic speech contrast (ba/wa) when it is based on the speed of the rate of change of frequency information (formant transition duration) versus the speed of the rate of change of amplitude modulation (rise time). We show that children with dyslexia have excellent phonetic discrimination based on formant transition duration, but poor phonetic discrimination based on envelope cues. The results explain why phonetic discrimination may be allophonic in developmental dyslexia (Serniclaes et al., 2004), and suggest new avenues for the remediation of developmental dyslexia. © 2010 Blackwell Publishing Ltd.
A Western apache writing system: the symbols of silas john.
Basso, K H; Anderson, N
1973-06-08
At the outset of this article, it was observed that the adequacy of an etic typology of written symbols could be judged by its ability to describe all the emic distinctions in all the writing systems of the world. In conclusion, we should like to return to this point and briefly examine the extent to which currently available etic concepts can be used to describe the distinctions made by Western Apaches in relation to the writing system of Silas John. Every symbol in the Silas John script may be classified as a phonetic-semantic sign. Symbols of this type denote linguistic expressions that consist of one or more words and contrast as a class with phonetic-nonsemantic signs, which denote phonemes (or phoneme clusters), syllables (or syllable clusters), and various prosodic phenomena (2, pp. 2, 248). Phonetic-semantic signs are commonly partitioned into two subclasses: logographs (which denote single words) and phraseographs (which denote one or more words). Although every symbol in the Silas John script can be assigned to one or the other of these categories, such an exercise is without justification (21). We have no evidence to suggest that Western Apaches classify symbols according to the length or complexity of their linguistic referents, and therefore the imposition of distinctions based on these criteria would be inappropriate and misleading. A far more useful contrast, and one we have already employed, is presented in most etic typologies as an opposition between compound (composite) and noncompound (noncomposite) symbols. Used to break down the category of phonetic-semantic signs, these two concepts enable us to describe more or less exactly the distinction Apaches draw between "symbol elements put together" (ke?escin ledidilgoh) and "symbol elements standing alone" (ke?escin doledidildaahi). The former may now be defined as consisting of compound phonetic-semantic signs, while the latter is composed of noncompound phonetic-semantic signs. Up to this point, etic concepts have served us well. However, a deficiency appears when we search for a terminology that allows us to describe the distinction between "symbols that tell what to say" and "symbols that tell what to do." As far as we have been able to determine, standard typologies make no provision for this kind of contrast, apparently because their creators have tacitly assumed that systems composed of phonetic-semantic signs serve exclusively to communicate linguistic information. Consequently, the possibility that these systems might also convey nonlinguistic information seems to have been ignored. This oversight may be a product of Western ethnocentrism; after all, it is we who use alphabets who most frequently associate writing with language (22). On the other hand, it may simply stem from the fact that systems incorporating symbols with kinesic referents are exceedingly rare and have not yet been reported. In any case, it is important to recognize that the etic inventory is not complete. Retaining the term "phonetic sign" as a label for written symbols that denote linguistic phenomena, we propose that the term "kinetic sign" be introduced to label symbols that denote sequences of nonverbal behavior. Symbols of the latter type that simultaneously denote some unit of language may be classified as "phonetic-kinetic" signs.
With these concepts, the contrast between "symbols that tell what to say" and "symbols that tell what to do" can be rephrased as one that distinguishes phonetic signs (by definition nonkinetic) from phonetic-kinetic signs. Purely kinetic signs, that is, symbols that refer solely to physical gestures, are absent from the Silas John script. The utility of the kinetic sign and the phonetic-kinetic sign as comparative concepts must ultimately be judged on the basis of their capacity to clarify and describe emic distinctions in other systems of writing. However, as we have previously pointed out, ethnographic studies of American Indian systems that address themselves to the identification of these distinctions (and thus provide the information necessary to evaluate the relevance and applicability of etic concepts) are in very short supply. As a result, meaningful comparisons cannot be made. At this point, we simply lack the data with which to determine whether the kinetic component so prominent in the Silas John script is unique or whether it had counterparts elsewhere in North America. The view is still prevalent among anthropologists and linguists that the great majority of American Indian writing systems conform to one or two global "primitive" types. Our study of the Silas John script casts doubt upon this position, for it demonstrates that fundamental emic distinctions remain to be discovered and that existing etic frameworks are less than adequately equipped to describe them. The implications of these findings are clear. On the one hand, we must acknowledge the possibility that several structurally distinct forms of writing were developed by North America's Indian cultures. Concomitantly, we must be prepared to abandon traditional ideas of typological similarity and simplicity among these systems in favor of those that take into account variation and complexity.
Levels of processing with free and cued recall and unilateral temporal lobe epilepsy.
Lespinet-Najib, Véronique; N'Kaoua, Bernard; Sauzéon, Hélène; Bresson, Christel; Rougier, Alain; Claverie, Bernard
2004-04-01
This study investigates the role of the temporal lobes in levels-of-processing tasks (phonetic and semantic encoding) according to the nature of recall tasks (free and cued recall). These tasks were administered to 48 patients with unilateral temporal epilepsy (right "RTLE"=24; left "LTLE"=24) and a normal group (n=24). The results indicated that LTLE patients were impaired for semantic processing (free and cued recall) and for phonetic processing (free and cued recall), while for RTLE patients deficits appeared in free recall with semantic processing. It is suggested that the left temporal lobe is involved in all aspects of verbal memory, and that the right temporal lobe is specialized in semantic processing. Moreover, our data seem to indicate that RTLE patients present a retrieval processing impairment (semantic condition), whereas the LTLE group is characterized by encoding difficulties in the phonetic and semantic condition.
Predicting phonetic transcription agreement: Insights from research in infant vocalizations
RAMSDELL, HEATHER L.; OLLER, D. KIMBROUGH; ETHINGTON, CORINNA A.
2010-01-01
The purpose of this study is to provide new perspectives on correlates of phonetic transcription agreement. Our research focuses on phonetic transcription and coding of infant vocalizations. The findings are presumed to be broadly applicable to other difficult cases of transcription, such as found in severe disorders of speech, which similarly result in low reliability for a variety of reasons. We evaluated the predictiveness of two factors not previously documented in the literature as influencing transcription agreement: canonicity and coder confidence. Transcribers coded samples of infant vocalizations, judging both canonicity and confidence. Correlation results showed that canonicity and confidence were strongly related to agreement levels, and regression results showed that canonicity and confidence both contributed significantly to explanation of variance. Specifically, the results suggest that canonicity plays a major role in transcription agreement when utterances involve supraglottal articulation, with coder confidence offering additional power in predicting transcription agreement. PMID:17882695
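A minimal sketch of the correlation-plus-regression logic described above: zero-order correlations of canonicity and confidence with agreement, followed by a multiple regression to check whether each predictor contributes unique variance. The utterance-level data here are simulated, not the study's corpus.

```python
import numpy as np
from scipy import stats

# Simulated per-utterance scores: transcription agreement, canonicity
# rating, and coder confidence. Values are invented for illustration.
rng = np.random.default_rng(0)
canonicity = rng.uniform(0, 1, 200)
confidence = 0.5 * canonicity + rng.normal(0, 0.2, 200)
agreement = 0.6 * canonicity + 0.3 * confidence + rng.normal(0, 0.1, 200)

# Zero-order correlations with agreement
for name, x in [("canonicity", canonicity), ("confidence", confidence)]:
    r, p = stats.pearsonr(x, agreement)
    print(f"{name}: r = {r:.2f}, p = {p:.3g}")

# Multiple regression: do both predictors explain unique variance?
X = np.column_stack([np.ones_like(canonicity), canonicity, confidence])
betas, *_ = np.linalg.lstsq(X, agreement, rcond=None)
print("intercept, b_canonicity, b_confidence =", np.round(betas, 2))
```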
NASA Astrophysics Data System (ADS)
Liberman, A. M.
1980-06-01
This report (1 April - 30 June) is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications. Manuscripts cover the following topics: The perceptual equivalence of two acoustic cues for a speech contrast is specific to phonetic perception; Duplex perception of acoustic patterns as speech and nonspeech; Evidence for phonetic processing of cues to place of articulation: Perceived manner affects perceived place; Some articulatory correlates of perceptual isochrony; Effects of utterance continuity on phonetic judgments; Laryngeal adjustments in stuttering: A glottographic observation using a modified reaction paradigm; Missing -ing in reading: Letter detection errors on word endings; Speaking rate, syllable stress, and vowel identity; Sonority and syllabicity: Acoustic correlates of perception; Influence of vocalic context on perception of the (S)-(s) distinction.
ERIC Educational Resources Information Center
Bravo, Maria Antonia Lavandera; And Others
1991-01-01
Four activities for French language classroom use are presented, including a simulation of the relationships and communication within a family; pronunciation instruction through phonetic transcription; cultural awareness through students' analysis of their own and their parents' specific memories; and analysis and comparison of a literary text and…
Speech Recognition: Acoustic-Phonetic Knowledge Acquisition and Representation.
1987-09-25
the release duration is the voice onset time, or VOT. For the purpose of this investigation, alveolar flaps (as in "butter") and glottalized /t/'s...Cambridge, Massachusetts 02139 Abstract females and 8 males. The other sentence was said by 7 females We discuss a framework for an acoustic-phonetic...tained a number of semivowels. One sentence was said by 6 vowels
ERIC Educational Resources Information Center
Champagne, Cecile
1984-01-01
Research is reported that showed phonetic and phonological training to be given peripheral attention or neglected in second language instruction in one geographic area. It is suggested this neglect stems from (1) low teacher and student expectation of success in attaining a native-like accent, and (2) assumptions that a non-native-like accent will…
Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc
2017-07-01
Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. One striking example comes from phonetic convergence, when speakers automatically tend to mimic their interlocutor's speech during communicative interaction. The goal of this study was to investigate sensory-motor linkage in speech production in postlingually deaf cochlear-implanted participants and normal-hearing elderly adults through phonetic convergence and imitation. To this aim, two vowel production tasks, with or without instruction to imitate an acoustic vowel, were proposed to three groups: young adults with normal hearing, elderly adults with normal hearing and post-lingually deaf cochlear-implanted patients. The deviation of each participant's f0 from their own mean f0 was measured to evaluate the ability to converge to each acoustic target. Results showed that cochlear-implanted participants have the ability to converge to an acoustic target, both intentionally and unintentionally, albeit to a lesser degree than young and elderly participants with normal hearing. By providing evidence for phonetic convergence and speech imitation, these results suggest that, as in young adults, perceptuo-motor relationships are efficient in elderly adults with normal hearing and that cochlear-implanted adults recovered significant perceptuo-motor abilities following cochlear implantation. Copyright © 2017 Elsevier Ltd. All rights reserved.
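A simplified convergence index consistent with the description above can be computed as the shift of a speaker's mean f0 from their own baseline toward the acoustic target. The function and values below are illustrative, not the paper's exact measure.

```python
import numpy as np

def convergence(baseline_f0, post_f0, target_f0):
    """Signed shift of produced f0 toward the target, relative to the
    speaker's own baseline mean; positive values indicate movement
    toward the target. A simplified index for illustration only."""
    baseline_mean = np.mean(baseline_f0)
    shift = np.mean(post_f0) - baseline_mean
    direction = np.sign(target_f0 - baseline_mean)
    return shift * direction

# Illustrative f0 values in Hz
print(convergence(baseline_f0=[118, 121, 119],
                  post_f0=[125, 127, 126],
                  target_f0=140))
```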
Cho, Taehong; McQueen, James M
2011-08-01
Two experiments examined whether perceptual recovery from Korean consonant-cluster simplification is based on language-specific phonological knowledge. In tri-consonantal C1C2C3 sequences such as /lkt/ and /lpt/ in Seoul Korean, either C1 or C2 can be completely deleted. Seoul Koreans monitored for C2 targets (/p/ or /k/, deleted or preserved) in the second word of a two-word phrase with an underlying /l/-C2-/t/ sequence. In Experiment 1 the target-bearing words had contextual lexical-semantic support. Listeners recovered deleted targets as fast and as accurately as preserved targets with both Word and Intonational Phrase (IP) boundaries between the two words. In Experiment 2, contexts were low-pass filtered. Listeners were still able to recover deleted targets as well as preserved targets in IP-boundary contexts, but better with physically-present targets than with deleted targets in Word-boundary contexts. This suggests that the benefit of having target acoustic-phonetic information emerges only when higher-order (contextual and phrase-boundary) information is not available. The strikingly efficient recovery of deleted phonemes with neither acoustic-phonetic cues nor contextual support demonstrates that language-specific phonological knowledge, rather than language-universal perceptual processes which rely on fine-grained phonetic details, is employed when the listener perceives the results of a continuous-speech process in which reduction is phonetically complete.
ERIC Educational Resources Information Center
Colonna-Preti, Paola; Taeschner, Traute
1987-01-01
Using a new method, 48 children in an elementary school in Rome, Italy, were taught a foreign language (26 English, 22 German) and tested after three years. The authors attempt to explain the variation in test results in terms of the students' attention, memory, and phonetic discrimination. (CFM)
Selections from Kuang-Ming JIH-PAO (Source Span: 17 May - 26 June 1961), Number 8 Communist China.
1961-08-31
to have a feeling of being unaccustomed to a certain new method, much like the feeling they have towards the use of phonetic symbols, Romanization or...seems to me that there is a certain unanimity among those who advocate the checking of Chinese words through phonetic sounds. The differences are...children’s mental development. We could not possibly ask children in kindergarten to learn algebra because natural maturity is also important. We have
Born with an ear for dialects? Structural plasticity in the expert phonetician brain.
Golestani, Narly; Price, Cathy J; Scott, Sophie K
2011-03-16
Are experts born with particular predispositions, or are they made through experience? We examined brain structure in expert phoneticians, individuals who are highly trained to analyze and transcribe speech. We found a positive correlation between the size of left pars opercularis and years of phonetic transcription training experience, illustrating how learning may affect brain structure. Phoneticians were also more likely to have multiple or split left transverse gyri in the auditory cortex than nonexpert controls, and the amount of phonetic transcription training did not predict auditory cortex morphology. The transverse gyri are thought to be established in utero; our results thus suggest that this gross morphological difference may have existed before the onset of phonetic training, and that its presence confers an advantage of sufficient magnitude to affect career choices. These results suggest complementary influences of domain-specific predispositions and experience-dependent brain malleability, influences that likely interact in determining not only how experience shapes the human brain but also why some individuals become engaged by certain fields of expertise.
Representational specificity of within-category phonetic variation in the mental lexicon
NASA Astrophysics Data System (ADS)
Ju, Min; Luce, Paul A.
2003-10-01
This study examines (1) whether within-category phonetic variation in voice onset time (VOT) is encoded in long-term memory and has consequences for subsequent word recognition and, if so, (2) whether such effects are greater in words with voiced counterparts (pat/bat) than those without (cow/*gow), given that VOT information is more critical for lexical discrimination in the former. Two long-term repetition priming experiments were conducted using words containing word-initial voiceless stops varying in VOT. Reaction times to a lexical decision were compared between the same and different VOT conditions in words with or without voiced counterparts. If veridical representations of each episode are preserved in memory, variation in VOT should have demonstrable effects on the magnitude of priming. However, if within-category variation is discarded and form-based representations are abstract, the variation in VOT should not mediate priming. The implications of these results for the specificity and abstractness of phonetic representations in long-term memory will be discussed.
Electrophysiological evidence for speech-specific audiovisual integration.
Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean
2014-01-01
Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.
Neural Signatures of Phonetic Learning in Adulthood: A Magnetoencephalography Study
Zhang, Yang; Kuhl, Patricia K.; Imada, Toshiaki; Iverson, Paul; Pruitt, John; Stevens, Erica B.; Kawakatsu, Masaki; Tohkura, Yoh'ichi; Nemoto, Iku
2010-01-01
The present study used magnetoencephalography (MEG) to examine perceptual learning of American English /r/ and /l/ categories by Japanese adults who had limited English exposure. A training software program was developed based on the principles of infant phonetic learning, featuring systematic acoustic exaggeration, multi-talker variability, visible articulation, and adaptive listening. The program was designed to help Japanese listeners utilize an acoustic dimension relevant for phonemic categorization of /r-l/ in English. Although training did not produce native-like phonetic boundary along the /r-l/ synthetic continuum in the second language learners, success was seen in highly significant identification improvement over twelve training sessions and transfer of learning to novel stimuli. Consistent with behavioral results, pre-post MEG measures showed not only enhanced neural sensitivity to the /r-l/ distinction in the left-hemisphere mismatch field (MMF) response but also bilateral decreases in equivalent current dipole (ECD) cluster and duration measures for stimulus coding in the inferior parietal region. The learning-induced increases in neural sensitivity and efficiency were also found in distributed source analysis using Minimum Current Estimates (MCE). Furthermore, the pre-post changes exhibited significant brain-behavior correlations between speech discrimination scores and MMF amplitudes as well as between the behavioral scores and ECD measures of neural efficiency. Together, the data provide corroborating evidence that substantial neural plasticity for second-language learning in adulthood can be induced with adaptive and enriched linguistic exposure. Like the MMF, the ECD cluster and duration measures are sensitive neural markers of phonetic learning. PMID:19457395
Listeners are maximally flexible in updating phonetic beliefs over time.
Saltzman, David; Myers, Emily
2018-04-01
Perceptual learning serves as a mechanism for listeners to adapt to novel phonetic information. Distributional tracking theories posit that this adaptation occurs as a result of listeners accumulating talker-specific distributional information about the phonetic category in question (Kleinschmidt & Jaeger, 2015, Psychological Review, 122). What is not known is how listeners build these talker-specific distributions; that is, whether they aggregate all information received over a certain time period, or whether they rely more heavily upon the most recent information received and down-weight older, consolidated information. In the present experiment, listeners were exposed to four interleaved blocks of a lexical decision task and a phonetic categorization task in which the lexical decision blocks were designed to bias perception in opposite directions along an "s"-"sh" continuum. Listeners returned several days later and completed the identical task again. Evidence was consistent with listeners using a relatively short temporal window of integration at the individual session level. Namely, in each individual session, listeners' perception of an "s"-"sh" contrast was biased by the information in the immediately preceding lexical decision block, and there was no evidence that listeners summed their experience with the talker over the entire session. Similarly, the magnitude of the bias effect did not change between sessions, consistent with the idea that talker-specific information remains flexible, even after consolidation. In general, results suggest that listeners are maximally flexible when considering how to categorize speech from a novel talker.
NASA Astrophysics Data System (ADS)
1992-06-01
Phonology is traditionally seen as the discipline that concerns itself with the building blocks of linguistic messages. It is the study of the structure of sound inventories of languages and of the participation of sounds in rules or processes. Phonetics, in contrast, concerns speech sounds as produced and perceived. Two extreme positions on the relationship between phonological messages and phonetic realizations are represented in the literature. One holds that the primary home for linguistic symbols, including phonological ones, is the human mind, itself housed in the human brain. The second holds that their primary home is the human vocal tract.
Electronic publishing: opportunities and challenges for clinical linguistics and phonetics.
Powell, Thomas W; Müller, Nicole; Ball, Martin J
2003-01-01
This paper discusses the contributions of informatics technology to the field of clinical linguistics and phonetics. The electronic publication of research reports and books has facilitated both the dissemination and the retrieval of scientific information. Electronic archives of speech and language corpora, too, stimulate research efforts. Although technology provides many opportunities, there remain significant challenges. Establishment and maintenance of scientific archives is largely dependent upon volunteer efforts, and there are few standards to ensure long-term access. Coordinated efforts and peer review are necessary to ensure utility and quality.
Training in Temporal Information Processing Ameliorates Phonetic Identification.
Szymaszek, Aneta; Dacewicz, Anna; Urban, Paulina; Szelag, Elzbieta
2018-01-01
Many studies have revealed a link between temporal information processing (TIP) in a millisecond range and speech perception. Previous studies indicated a dysfunction in TIP accompanied by deficient phonemic hearing in children with specific language impairment (SLI). In this study we concentrate on phonetic identification in SLI, using the voice-onset-time (VOT) phenomenon, in which TIP is built in. VOT is crucial for speech perception, as stop consonants (like /t/ vs. /d/) may be distinguished by an acoustic difference in time between the onsets of the consonant (stop release burst) and the following vibration of vocal folds (voicing). In healthy subjects two categories (voiced and unvoiced) are determined using the VOT task. The present study aimed at verifying whether children with SLI show a similar pattern of phonetic identification as their healthy peers and whether an intervention based on TIP results in improved performance on the VOT task. Children aged from 5 to 8 years (n = 47) were assigned to two groups: normal children without any language disability (NC, n = 20), and children with SLI (n = 27). In the latter group participants were randomly classified into two treatment subgroups, i.e., experimental temporal training (EG, n = 14) and control non-temporal training (CG, n = 13). The analyzed indicators of phonetic identification were: (1) the boundary location (α), determined as the VOT value corresponding to 50% voicing/unvoicing distinctions; (2) ranges of voiced/unvoiced categories; (3) the slope of the identification curve (β), reflecting identification correctness; (4) percent of voiced distinctions within the applied VOT spectrum. The results indicated similar α values and similar ranges of voiced/unvoiced categories between SLI and NC. However, β in SLI was significantly higher than that in NC. After the intervention, a significant improvement of β was observed only in EG, who achieved a level of performance comparable to that observed in NC. The training-related improvement in CG was non-significant. Furthermore, only in EG did the β values in the post-test correlate with measures of TIP as well as with phonemic hearing obtained in our previous studies. These findings provide further evidence that TIP is omnipresent in language communication and reflected not only in phonemic hearing but also in phonetic identification.
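Boundary (α) and slope (β) indicators of the kind described above are typically recovered by fitting a logistic identification curve to the proportion of voiced responses across the VOT continuum. The sketch below uses invented response proportions purely to show the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

# VOT steps (ms) and proportion of "voiced" responses at each step;
# the response values are invented for illustration.
vot = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
p_voiced = np.array([0.98, 0.95, 0.85, 0.50, 0.15, 0.05, 0.02])

def logistic(x, alpha, beta):
    # alpha: boundary location (VOT at 50% voiced responses)
    # beta: slope of the identification curve
    return 1.0 / (1.0 + np.exp(beta * (x - alpha)))

(alpha, beta), _ = curve_fit(logistic, vot, p_voiced, p0=[30.0, 0.2])
print(f"boundary alpha = {alpha:.1f} ms, slope beta = {beta:.2f}")
```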
Brown, Helen; Clayards, Meghan
2017-01-01
Background High talker variability (i.e., multiple voices in the input) has been found effective in training nonnative phonetic contrasts in adults. A small number of studies suggest that children also benefit from high-variability phonetic training with some evidence that they show greater learning (more plasticity) than adults given matched input, although results are mixed. However, no study has directly compared the effectiveness of high versus low talker variability in children. Methods Native Greek-speaking eight-year-olds (N = 52), and adults (N = 41) were exposed to the English /i/-/ɪ/ contrast in 10 training sessions through a computerized word-learning game. Pre- and post-training tests examined discrimination of the contrast as well as lexical learning. Participants were randomly assigned to high (four talkers) or low (one talker) variability training conditions. Results Both age groups improved during training, and both improved more while trained with a single talker. Results of a three-interval oddity discrimination test did not show the predicted benefit of high-variability training in either age group. Instead, children showed an effect in the reverse direction—i.e., reliably greater improvements in discrimination following single talker training, even for untrained generalization items, although the result is qualified by (accidental) differences between participant groups at pre-test. Adults showed a numeric advantage for high-variability but were inconsistent with respect to voice and word novelty. In addition, no effect of variability was found for lexical learning. There was no evidence of greater plasticity for phonetic learning in child learners. Discussion This paper adds to the handful of studies demonstrating that, like adults, child learners can improve their discrimination of a phonetic contrast via computerized training. There was no evidence of a benefit of training with multiple talkers, either for discrimination or word learning. The results also do not support the findings of greater plasticity in child learners found in a previous paper (Giannakopoulou, Uther & Ylinen, 2013a). We discuss these results in terms of various differences between training and test tasks used in the current work compared with previous literature. PMID:28584698
Giannakopoulou, Anastasia; Brown, Helen; Clayards, Meghan; Wonnacott, Elizabeth
2017-01-01
High talker variability (i.e., multiple voices in the input) has been found effective in training nonnative phonetic contrasts in adults. A small number of studies suggest that children also benefit from high-variability phonetic training with some evidence that they show greater learning (more plasticity) than adults given matched input, although results are mixed. However, no study has directly compared the effectiveness of high versus low talker variability in children. Native Greek-speaking eight-year-olds ( N = 52), and adults ( N = 41) were exposed to the English /i/-/ɪ/ contrast in 10 training sessions through a computerized word-learning game. Pre- and post-training tests examined discrimination of the contrast as well as lexical learning. Participants were randomly assigned to high (four talkers) or low (one talker) variability training conditions. Both age groups improved during training, and both improved more while trained with a single talker. Results of a three-interval oddity discrimination test did not show the predicted benefit of high-variability training in either age group. Instead, children showed an effect in the reverse direction-i.e., reliably greater improvements in discrimination following single talker training, even for untrained generalization items, although the result is qualified by (accidental) differences between participant groups at pre-test. Adults showed a numeric advantage for high-variability but were inconsistent with respect to voice and word novelty. In addition, no effect of variability was found for lexical learning. There was no evidence of greater plasticity for phonetic learning in child learners. This paper adds to the handful of studies demonstrating that, like adults, child learners can improve their discrimination of a phonetic contrast via computerized training. There was no evidence of a benefit of training with multiple talkers, either for discrimination or word learning. The results also do not support the findings of greater plasticity in child learners found in a previous paper (Giannakopoulou, Uther & Ylinen, 2013a). We discuss these results in terms of various differences between training and test tasks used in the current work compared with previous literature.
Linking Cognitive and Social Aspects of Sound Change Using Agent-Based Modeling.
Harrington, Jonathan; Kleber, Felicitas; Reubold, Ulrich; Schiel, Florian; Stevens, Mary
2018-03-26
The paper defines the core components of an interactive-phonetic (IP) sound change model. The starting point for the IP-model is that a phonological category is often skewed phonetically in a certain direction by the production and perception of speech. A prediction of the model is that sound change is likely to come about as a result of perceiving phonetic variants in the direction of the skew and at the probabilistic edge of the listener's phonological category. The results of agent-based computational simulations applied to the sound change in progress, /u/-fronting in Standard Southern British, were consistent with this hypothesis. The model was extended to sound changes involving splits and mergers by using the interaction between the agents to drive the phonological reclassification of perceived speech signals. The simulations showed no evidence of any acoustic change when this extended model was applied to Australian English data in which /s/ has been shown to retract due to coarticulation in /str/ clusters. Some agents nevertheless varied in their phonological categorizations during interaction between /str/ and /ʃtr/: This vacillation may represent the potential for sound change to occur. The general conclusion is that many types of sound change are the outcome of how phonetic distributions are oriented with respect to each other, their association to phonological classes, and how these types of information vary between speakers that happen to interact with each other. Copyright © 2018 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
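A minimal agent-based sketch of the interactive-phonetic idea described above: agents exchange tokens drawn from their stored exemplars, productions are skewed slightly frontward (higher F2, as in /u/-fronting), and a listener only absorbs a token that is plausible under its own category distribution, so the skewed variants gradually pull the category along. All values, thresholds, and the one-dimensional representation are illustrative simplifications, not the paper's model.

```python
import random

# Each agent stores F2-like exemplars (Hz) for /u/; all numbers invented.
random.seed(1)
agents = [[random.gauss(800, 50) for _ in range(100)] for _ in range(2)]

def produce(memory, skew=15.0):
    # Production is a stored exemplar plus noise skewed toward higher F2
    return random.choice(memory) + random.gauss(skew, 20.0)

def perceive(memory, token, tolerance=3.0):
    # Absorb the token only if it is plausible under the listener's category
    mean = sum(memory) / len(memory)
    sd = (sum((x - mean) ** 2 for x in memory) / len(memory)) ** 0.5
    if abs(token - mean) < tolerance * sd:
        memory.append(token)
        memory.pop(0)  # forget the oldest exemplar

for _ in range(5000):
    speaker, listener = random.sample(range(2), 2)
    perceive(agents[listener], produce(agents[speaker]))

for i, mem in enumerate(agents):
    print(f"agent {i}: mean F2 = {sum(mem) / len(mem):.0f} Hz")
```

Running the loop shows both agents' category means drifting upward together, which is the kind of gradual, interaction-driven shift the model attributes to skewed phonetic distributions.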
Perception of initial obstruent voicing is influenced by gestural organization
Best, Catherine T.; Hallé, Pierre A.
2009-01-01
Cross-language differences in phonetic settings for phonological contrasts of stop voicing have posed a challenge for attempts to relate specific phonological features to specific phonetic details. We probe the phonetic-phonological relationship for voicing contrasts more broadly, analyzing in particular their relevance to nonnative speech perception, from two theoretical perspectives: feature geometry and articulatory phonology. Because these perspectives differ in assumptions about temporal/phasing relationships among features/gestures within syllable onsets, we undertook a cross-language investigation on perception of obstruent (stop, fricative) voicing contrasts in three nonnative onsets that use a common set of features/gestures but with differing time-coupling. Listeners of English and French, which differ in their phonetic settings for word-initial stop voicing distinctions, were tested on perception of three onset types, all nonnative to both English and French, that differ in how initial obstruent voicing is coordinated with a lateral feature/gesture and additional obstruent features/gestures. The targets, listed from least complex to most complex onsets, were: a lateral fricative voicing distinction (Zulu /ɬ/-/ɮ/), a laterally-released affricate voicing distinction (Tlingit /tɬ/-/dɮ/), and a coronal stop voicing distinction in stop+/l/ clusters (Hebrew /tl/-/dl/). English and French listeners’ performance reflected the differences in their native languages’ stop voicing distinctions, compatible with prior perceptual studies on singleton consonant onsets. However, both groups’ abilities to perceive voicing as a separable parameter also varied systematically with the structure of the target onsets, supporting the notion that the gestural organization of syllable onsets systematically affects perception of initial voicing distinctions. PMID:20228878
Cross-language Activation and the Phonetics of Code-switching
NASA Astrophysics Data System (ADS)
Piccinini, Page Elizabeth
It is now well established that bilinguals have both languages activated to some degree at all times. This cross-language activation has been documented in several research paradigms, including picture naming, reading, and electrophysiological studies. What is less well understood is how the degree to which a language is activated can vary in different language environments or contexts. Furthermore, when investigating effects of order of acquisition and language dominance, past research has been mixed, as the two variables are often conflated. In this dissertation, I test how the degree of cross-language activation can vary according to context by examining phonetic productions in code-switching speech. Both spontaneous speech and scripted speech are analyzed. Follow-up perception experiments are conducted to see if listeners are able to anticipate language switches, potentially due to the phonetic cues in the signal. Additionally, by focusing on early bilinguals who are L1 Spanish but English dominant, I am able to see what plays a greater role in cross-language activation, order of acquisition or language dominance. I find that speakers do have intermediate phonetic productions in code-switching contexts relative to monolingual contexts. Effects are larger and more consistent in English than Spanish. Similar effects are found in speech perception. Listeners are able to anticipate language switches from English to Spanish but not Spanish to English. Together these results suggest that language dominance is a more important factor than order of acquisition in cross-language activation for early bilinguals. Future models of bilingual language organization and access should take into account both context and language dominance when modeling degrees of cross-language activation.
Some Problems of American Students in Mastering Persian Phonology.
NASA Astrophysics Data System (ADS)
Ghadessy, Esmael
1988-12-01
An adult learning to speak a foreign language normally retains an "accent" which may affect the intelligibility of certain sounds, but more often simply conveys the fact that the speaker is a non-native speaker. Various scholars have experimented with and discussed the elements involved in a foreign accent. However, in Iran very few researchers have attempted to verify scientifically what the phonetic and phonological aspects of an "accent" are. This author tried to determine whether or not a selected group of words, emphasizing stop voicing, produced by native speakers of Persian showed significant phonetic and phonemic differences from those produced by the American students. Subjects for the experiments were three groups of students, one Iranian and two American. A contrastive analysis of the Persian and the English stop consonants was made, and an identical measurement test was administered to all three groups. A Kay Sona-Graph was used for acoustic analysis, and all spoken data from the Iranian group were compared with those of the American groups. An examination of acoustic correlates of Tehran Persian stops produced by American students shows that the phonetically different but similar feature of /voice/ found in Tehran Persian and English stops is intuitive to the Americans, and that the language learner cannot readily disassociate a phonological feature from habits of articulation. The results of this research support using the phonetic method for adult learners who want to improve their pronunciation ability. Further research and experimentation is needed on the effect of suprasegmental elements on a foreign accent, on the most effective teaching materials and methods, and on other possible techniques in the teaching process.
Identification of prelinguistic phonological categories.
Ramsdell, Heather L; Oller, D Kimbrough; Buder, Eugene H; Ethington, Corinna A; Chorna, Lesya
2012-12-01
The prelinguistic infant's babbling repertoire of syllables (the phonological categories that form the basis for early word learning) is noticed by caregivers who interact with infants around them. Prior research on babbling has not explored the caregiver's role in recognition of early vocal categories as foundations for word learning. In the present work, the authors begin to address this gap. The authors explored vocalizations produced by 8 infants at 3 ages (8, 10, and 12 months) in studies illustrating identification of phonological categories through caregiver report, laboratory procedures simulating the caregiver's natural mode of listening, and the more traditional laboratory approach (phonetic transcription). Caregivers reported small repertoires of syllables for their infants. Repertoires of similar size and phonetic content were discerned in the laboratory by judges who simulated the caregiver's natural mode of listening. However, phonetic transcription with repeated listening to infant recordings yielded repertoire sizes that vastly exceeded those reported by caregivers and naturalistic listeners. The results suggest that caregiver report and naturalistic listening by laboratory staff can provide a new way to explore key characteristics of early infant vocal categories, a way that may provide insight into later speech and language development.
A Dual-Route Model that Learns to Pronounce English Words
NASA Technical Reports Server (NTRS)
Remington, Roger W.; Miller, Craig S.; Null, Cynthia H. (Technical Monitor)
1995-01-01
This paper describes a model that learns to pronounce English words. Learning occurs in two modules: 1) a rule-based module that constructs pronunciations by phonetic analysis of the letter string, and 2) a whole-word module that learns to associate subsets of letters to the pronunciation, without phonetic analysis. In a simulation on a corpus of over 300 words the model produced pronunciation latencies consistent with the effects of word frequency and orthographic regularity observed in human data. Implications of the model for theories of visual word processing and reading instruction are discussed.
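As a rough illustration of the dual-route idea only (not the paper's model or its learning mechanism), the sketch below contrasts a rule route that assembles a pronunciation from grapheme-to-phoneme correspondences with a whole-word route that retrieves a stored association. The tiny rule set, lexicon, frequency counts, and latencies are invented placeholders.

    # Illustrative dual-route sketch; the rule set, lexicon, and timing values are
    # invented for demonstration and are not taken from the paper.
    GPC_RULES = {"ave": "eIv", "sh": "S", "ph": "f", "a": "{", "e": "E", "i": "I",
                 "o": "Q", "u": "V", "c": "k", "h": "h", "m": "m", "n": "n",
                 "p": "p", "s": "s", "t": "t", "v": "v"}

    LEXICON = {  # whole-word route: stored spelling -> (pronunciation, frequency)
        "have": ("h{v", 5000),   # frequent and irregular: rules alone would give "heIv"
        "cave": ("keIv", 120),
        "pint": ("paInt", 40),
    }

    def rule_route(word):
        """Left-to-right, longest-match grapheme-to-phoneme conversion."""
        out, i = "", 0
        while i < len(word):
            for size in (3, 2, 1):
                chunk = word[i:i + size]
                if chunk in GPC_RULES:
                    out += GPC_RULES[chunk]
                    i += size
                    break
            else:
                i += 1  # skip letters the toy rule set does not cover
        return out

    def pronounce(word):
        """Whole-word route answers for familiar words; rules handle novel strings."""
        if word in LEXICON:
            phon, freq = LEXICON[word]
            latency = 400 - min(freq, 300) / 3   # toy latency (ms): frequent = faster
            return phon, latency
        return rule_route(word), 600             # novel words fall back to the rule route

    for w in ("have", "cave", "mave"):
        print(w, pronounce(w))

Frequency effects then fall out of the whole-word route (frequent words are retrieved faster), while regularity effects fall out of whether the two routes agree, which is broadly the pattern of latencies the abstract reports.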
Ototake, Harumi; Yamada, Jun
2005-10-01
The same syllables /mu/ and /ra/, written in Japanese hiragana and romaji and presented in a standard speeded naming task, elicited phonetically or acoustically different responses in a syllabic hiragana condition and a romaji condition. The participants were two groups of Japanese college students (ns = 15 and 16) with different familiarity with English as a second language. The results suggested that the phonetic reality of syllables represented in these scripts can differ, depending on the interaction between the kind of script and speakers' orthographic familiarity.
Lively, Scott E.; Pisoni, David B.; Yamada, Reiko A.; Tohkura, Yoh’ichi; Yamada, Tsuneo
2012-01-01
Monolingual speakers of Japanese were trained to identify English /r/ and /l/ using Logan et al.’s [J. Acoust. Soc. Am. 89, 874–886 (1991)] high-variability training procedure. Subjects’ performance improved from the pretest to the post-test and during the 3 weeks of training. Performance during training varied as a function of talker and phonetic environment. Generalization accuracy to new words depended on the voice of the talker producing the /r/–/l/ contrast: Subjects were significantly more accurate when new words were produced by a familiar talker than when new words were produced by an unfamiliar talker. This difference could not be attributed to differences in intelligibility of the stimuli. Three and six months after the conclusion of training, subjects returned to the laboratory and were given the post-test and tests of generalization again. Performance was surprisingly good on each test after 3 months without any further training: Accuracy decreased only 2% from the post-test given at the end of training to the post-test given 3 months later. Similarly, no significant decrease in accuracy was observed for the tests of generalization. After 6 months without training, subjects’ accuracy was still 4.5% above pretest levels. Performance on the tests of generalization did not decrease and significant differences were still observed between talkers. The present results suggest that the high-variability training paradigm encourages a long-term modification of listeners’ phonetic perception. Changes in perception are brought about by shifts in selective attention to the acoustic cues that signal phonetic contrasts. These modifications in attention appear to be retained over time, despite the fact that listeners are not exposed to the /r/–/l/ contrast in their native language environment. PMID:7963022
Gildersleeve-Neumann, Christina E; Kester, Ellen S; Davis, Barbara L; Peña, Elizabeth D
2008-07-01
English speech acquisition by typically developing 3- to 4-year-old children with monolingual English backgrounds was compared to English speech acquisition by typically developing 3- to 4-year-old children with bilingual English-Spanish backgrounds. We predicted that exposure to Spanish would not affect the English phonetic inventory but would increase error frequency and type in bilingual children. Single-word speech samples were collected from 33 children. Phonetically transcribed samples for the 3 groups (monolingual English children, English-Spanish bilingual children who were predominantly exposed to English, and English-Spanish bilingual children with relatively equal exposure to English and Spanish) were compared at 2 time points and for change over time for phonetic inventory, phoneme accuracy, and error pattern frequencies. Children demonstrated similar phonetic inventories. Some bilingual children produced Spanish phonemes in their English and produced few consonant cluster sequences. Bilingual children with relatively equal exposure to English and Spanish averaged more errors than did bilingual children who were predominantly exposed to English. Both bilingual groups showed higher error rates than English-only children overall, particularly for syllable-level error patterns. All language groups decreased in some error patterns, although the ones that decreased were not always the same across language groups. Some group differences in error patterns and accuracy were significant. Vowel error rates did not differ by language group. Exposure to English and Spanish may result in a higher English error rate in typically developing bilinguals, including the application of Spanish phonological properties to English. Slightly higher error rates are likely typical for bilingual preschool-aged children. Change over time at these time points for all 3 groups was similar, suggesting that all will reach an adult-like system in English with exposure and practice.
Phonetic diversity, statistical learning, and acquisition of phonology.
Pierrehumbert, Janet B
2003-01-01
In learning to perceive and produce speech, children master complex language-specific patterns. Daunting language-specific variation is found both in the segmental domain and in the domain of prosody and intonation. This article reviews the challenges posed by results in phonetic typology and sociolinguistics for the theory of language acquisition. It argues that categories are initiated bottom-up from statistical modes in use of the phonetic space, and sketches how exemplar theory can be used to model the updating of categories once they are initiated. It also argues that bottom-up initiation of categories is successful thanks to the perception-production loop operating in the speech community. The behavior of this loop means that the superficial statistical properties of speech available to the infant indirectly reflect the contrastiveness and discriminability of categories in the adult grammar. The article also argues that the developing system is refined using internal feedback from type statistics over the lexicon, once the lexicon is well-developed. The application of type statistics to a system initiated with surface statistics does not cause a fundamental reorganization of the system. Instead, it exploits confluences across levels of representation which characterize human language and make bootstrapping possible.
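The following toy sketch illustrates the general idea of bottom-up category initiation from statistical modes followed by exemplar-style updating. The one-dimensional VOT-like space, kernel bandwidth, and nearest-category update rule are illustrative assumptions, not the article's proposal in detail.

    # Toy sketch: initiate categories at modes of a kernel density estimate over a
    # phonetic dimension, then update them exemplar-style. Values are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    # Input: VOT-like values sampled from two adult categories the learner does not yet "know".
    tokens = np.concatenate([rng.normal(15, 6, 400), rng.normal(70, 12, 400)])

    # 1) Kernel density estimate over the phonetic dimension.
    grid = np.linspace(tokens.min() - 10, tokens.max() + 10, 500)
    bandwidth = 5.0
    density = np.exp(-0.5 * ((grid[:, None] - tokens[None, :]) / bandwidth) ** 2).sum(axis=1)

    # 2) Initiate one category per local mode of the density.
    modes = [grid[i] for i in range(1, len(grid) - 1)
             if density[i] > density[i - 1] and density[i] > density[i + 1]]
    categories = [[m] for m in modes]   # each category starts as a single exemplar
    print("initiated category modes (ms):", [round(float(m), 1) for m in modes])

    # 3) Exemplar-style updating: each incoming token is stored with the closest category.
    for tok in rng.normal(70, 12, 50):
        closest = min(categories, key=lambda c: abs(np.mean(c) - tok))
        closest.append(tok)
    print("category means after updating:", [round(float(np.mean(c)), 1) for c in categories])

With these settings two modes typically emerge, one per adult category, and later tokens simply sharpen whichever category they fall closest to; the article's fuller proposal additionally brings in the perception-production loop and lexical type statistics, which this sketch does not model.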
Infants Encode Phonetic Detail during Cross-Situational Word Learning
Escudero, Paola; Mulak, Karen E.; Vlach, Haley A.
2016-01-01
Infants often hear new words in the context of more than one candidate referent. In cross-situational word learning (XSWL), word-object mappings are determined by tracking co-occurrences of words and candidate referents across multiple learning events. Research demonstrates that infants can learn words in XSWL paradigms, suggesting that it is a viable model of real-world word learning. However, these studies have all presented infants with words that have no or minimal phonological overlap (e.g., BLICKET and GAX). Words often contain some degree of phonological overlap, and it is unknown whether infants can simultaneously encode fine phonological detail while learning words via XSWL. We tested 12-, 15-, 17-, and 20-month-olds’ XSWL of eight words that, when paired, formed non-minimal pairs (e.g., BON–DEET) or minimal pairs (MPs; e.g., BON–TON, DEET–DIT). The results demonstrated that infants are able to learn word-object mappings and encode them with sufficient phonetic detail as to identify words in both non-minimal and MP contexts. Thus, this work suggests that infants are able to simultaneously discriminate phonetic differences between words and map words to referents in an implicit learning paradigm such as XSWL. PMID:27708605
Two ways to listen: Do L2-dominant bilinguals perceive stop voicing according to language mode?
Antoniou, Mark; Tyler, Michael D.; Best, Catherine T.
2012-01-01
How listeners categorize two phones predicts the success with which they will discriminate the given phonetic distinction. In the case of bilinguals, such perceptual patterns could reveal whether the listener’s two phonological systems are integrated or separate. This is of particular interest when a given contrast is realized differently in each language, as is the case with Greek and English stop-voicing distinctions. We had Greek–English early sequential bilinguals and Greek and English monolinguals (baselines) categorize, rate, and discriminate stop-voicing contrasts in each language. All communication with each group of bilinguals occurred solely in one language mode, Greek or English. The monolingual groups showed the expected native-language constraints, each perceiving their native contrast more accurately than the opposing nonnative contrast. Bilinguals’ category-goodness ratings for the same physical stimuli differed, consistent with their language mode, yet their discrimination performance was unaffected by language mode and biased toward their dominant language (English). We conclude that bilinguals integrate both languages in a common phonetic space that is swayed by their long-term dominant language environment for discrimination, but that they selectively attend to language-specific phonetic information for phonologically motivated judgments (category-goodness ratings). PMID:22844163
Age-related changes in the anticipatory coarticulation in the speech of young children
NASA Astrophysics Data System (ADS)
Parson, Mathew; Lloyd, Amanda; Stoddard, Kelly; Nissen, Shawn L.
2003-10-01
This paper investigates the possible patterns of anticipatory coarticulation in the speech of young children. Speech samples were elicited from three groups of children between 3 and 6 years of age and one comparison group of adults. The utterances were recorded online in a quiet room environment using high quality microphones and direct analog-to-digital conversion to computer disk. Formant frequency measures (F1, F2, and F3) were extracted from a centralized and unstressed vowel (schwa) spoken prior to two different sets of productions. The first set of productions consisted of the target vowel followed by a series of real words containing an initial CV(C) syllable (voiceless obstruent-monophthongal vowel) in a range of phonetic contexts, while the second set consisted of a series of nonword productions with a relatively constrained phonetic context. An analysis of variance was utilized to determine if the formant frequencies varied systematically as a function of age, gender, and phonetic context. Results will also be discussed in association with spectral moment measures extracted from the obstruent segment immediately following the target vowel. [Work supported by research funding from Brigham Young University.]
Auditory attention strategy depends on target linguistic properties and spatial configuration
McCloy, Daniel R.; Lee, Adrian K. C.
2015-01-01
Whether crossing a busy intersection or attending a large dinner party, listeners sometimes need to attend to multiple spatially distributed sound sources or streams concurrently. How they achieve this is not clear—some studies suggest that listeners cannot truly simultaneously attend to separate streams, but instead combine attention switching with short-term memory to achieve something resembling divided attention. This paper presents two oddball detection experiments designed to investigate whether directing attention to phonetic versus semantic properties of the attended speech impacts listeners' ability to divide their auditory attention across spatial locations. Each experiment uses four spatially distinct streams of monosyllabic words, variation in cue type (providing phonetic or semantic information), and attention to one or two locations. A rapid button-press response paradigm is employed to minimize the role of short-term memory in performing the task. Results show that differences in the spatial configuration of attended and unattended streams interact with linguistic properties of the speech streams to impact performance. Additionally, listeners may leverage phonetic information to make oddball detection judgments even when oddballs are semantically defined. Both of these effects appear to be mediated by the overall complexity of the acoustic scene. PMID:26233011
A weighted reliability measure for phonetic transcription.
Oller, D Kimbrough; Ramsdell, Heather L
2006-12-01
The purpose of the present work is to describe and illustrate the utility of a new tool for assessment of transcription agreement. Traditional measures have not characterized overall transcription agreement with sufficient resolution, specifically because they have often treated all phonetic differences between segments in transcriptions as equivalent, thus constituting an unweighted approach to agreement assessment. The measure the authors have developed calculates a weighted transcription agreement value based on principles derived from widely accepted tenets of phonological theory. To investigate the utility of the new measure, 8 coders transcribed samples of speech and infant vocalizations. Comparing the transcriptions through a computer-based implementation of the new weighted and the traditional unweighted measures, they investigated the scaling properties of both. The results illustrate better scaling with the weighted measure, in particular because the weighted measure is not subject to the floor effects that occur with the traditional measure when applied to samples that are difficult to transcribe. Furthermore, the new weighted measure shows orderly relations in degree of agreement across coded samples of early canonical-stage babbling, early meaningful speech in English, and 3 adult languages. The authors conclude that the weighted measure may provide improved foundations for research on phonetic transcription and for monitoring of transcription reliability.
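The abstract does not spell out the weighting scheme, so the sketch below only illustrates the contrast between unweighted (all-or-none) and weighted (feature-based partial credit) segment agreement. The three-feature representation, the equal feature weights, and the assumption of already-aligned transcriptions are invented for the example; the authors' actual weights are derived from phonological theory and will differ.

    # Hypothetical sketch contrasting unweighted and weighted transcription agreement.
    FEATURES = {  # place, manner, voicing (toy values, consonants only)
        "p": ("labial",   "stop",      0), "b": ("labial",   "stop",      1),
        "t": ("alveolar", "stop",      0), "d": ("alveolar", "stop",      1),
        "s": ("alveolar", "fricative", 0), "z": ("alveolar", "fricative", 1),
        "m": ("labial",   "nasal",     1), "n": ("alveolar", "nasal",     1),
    }

    def segment_agreement(a, b):
        """1.0 for identical segments; otherwise the share of matching features."""
        if a == b:
            return 1.0
        matches = sum(fa == fb for fa, fb in zip(FEATURES[a], FEATURES[b]))
        return matches / 3

    def transcription_agreement(t1, t2, weighted=True):
        pairs = list(zip(t1, t2))   # assumes the two transcriptions are already aligned
        scores = [segment_agreement(a, b) if weighted else float(a == b) for a, b in pairs]
        return sum(scores) / len(scores)

    coder1, coder2 = "bat", "pad"   # two coders' transcriptions of the same token
    print("unweighted agreement:", round(transcription_agreement(coder1, coder2, weighted=False), 2))  # 0.33
    print("weighted agreement:  ", round(transcription_agreement(coder1, coder2), 2))                  # 0.78

Because near-misses still earn partial credit, the weighted score stays off the floor for hard-to-transcribe material (for example, infant vocalizations), which is the scaling advantage described above.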
Context effects on second-language learning of tonal contrasts.
Chang, Charles B; Bowles, Anita R
2015-12-01
Studies of lexical tone learning generally focus on monosyllabic contexts, while reports of phonetic learning benefits associated with input variability are based largely on experienced learners. This study trained inexperienced learners on Mandarin tonal contrasts to test two hypotheses regarding the influence of context and variability on tone learning. The first hypothesis was that increased phonetic variability of tones in disyllabic contexts makes initial tone learning more challenging in disyllabic than monosyllabic words. The second hypothesis was that the learnability of a given tone varies across contexts due to differences in tonal variability. Results of a word learning experiment supported both hypotheses: tones were acquired less successfully in disyllables than in monosyllables, and the relative difficulty of disyllables was closely related to contextual tonal variability. These results indicate limited relevance of monosyllable-based data on Mandarin learning for the disyllabic majority of the Mandarin lexicon. Furthermore, in the short term, variability can diminish learning; its effects are not necessarily beneficial but dependent on acquisition stage and other learner characteristics. These findings thus highlight the importance of considering contextual variability and the interaction between variability and type of learner in the design, interpretation, and application of research on phonetic learning.
NASA Astrophysics Data System (ADS)
Messum, Piers
2004-05-01
Is imitation a necessary part of learning to talk? The faithful replication by children of such arbitrary phenomena of English as tense and lax vowel properties, "rhythm," and context-dependent VOTs seems to insist that it is. But a nonimitative account of this is also possible. It relies on two principal mechanisms. First, basic speech sounds are learned by emulation: attempting to reproduce the results achieved by other speakers but without copying their actions to do so. The effectiveness of the output provides sufficient feedback to inform the child of the adequacy of its performance and to guide refinement. Second, phonetic phenomena such as those above appear through aerodynamic accommodation. Key elements of this are (a) that speech breathing is a complex motor skill which dominates other articulatory processes during acquisition and starts pulsatile before becoming smooth, and (b) that a child-scale production system imposes constraints on talking which do not operate in the adult speaker. Much of "the terrible complexity of phonetic patterns" [J. Pierrehumbert, Lang. Speech 46, 115-154 (2003)] then becomes epiphenomenal: appearing not as a result of young learners copying phonetic detail that is not linguistically significant, but of them reconciling conflicting production demands while just talking.
Munson, Benjamin; Edwards, Jan; Schellinger, Sarah; Beckman, Mary E.; Meyer, Marie K.
2010-01-01
This article honours Adele Miccio's life work by reflecting on the utility of phonetic transcription. The first section reviews the literature on cases where children whose speech appears to neutralize a contrast in the adult language are found on closer examination to produce a contrast (covert contrast). We present evidence from a new series of perception studies that covert contrast may be far more prevalent in children's speech than existing studies would suggest. The second section presents the results of a new study designed to examine whether naïve listeners' perception of children's /s/ and /θ/ productions can be changed experimentally when they are led to believe that the children who produced the sounds were older or younger. Here, it is shown that, under the right circumstances, adults report more tokens of /θ/ to be accurate productions of /s/ when they believe a talker to be an older child than when they believe the talker to be younger. This finding suggests that auditory information alone cannot be the sole basis for judging the accuracy of a sound. The final section presents recommendations for supplementing phonetic transcription with other measures, to gain a fuller picture of children's production abilities. PMID:20345255
Blanco, Cynthia P.; Bannard, Colin; Smiljanic, Rajka
2016-01-01
Early bilinguals often show as much sensitivity to L2-specific contrasts as monolingual speakers of the L2, but most work on cross-language speech perception has focused on isolated segments, and typically only on neighboring vowels or stop contrasts. In tasks that include sounds in context, listeners’ success is more variable, so segment discrimination in isolation may not adequately represent the phonetic detail in stored representations. The current study explores the relationship between language experience and sensitivity to segmental cues in context by comparing the categorization patterns of monolingual English listeners and early and late Spanish–English bilinguals. Participants categorized nonce words containing different classes of English- and Spanish-specific sounds as being more English-like or more Spanish-like; target segments included phonemic cues, cues for which there is no analogous sound in the other language, or phonetic cues, cues for which English and Spanish share the category but for which each language varies in its phonetic implementation. Listeners’ language categorization accuracy and reaction times were analyzed. Our results reveal a largely uniform categorization pattern across listener groups: Spanish cues were categorized more accurately than English cues, and phonemic cues were easier for listeners to categorize than phonetic cues. There were no differences in the sensitivity of monolinguals and early bilinguals to language-specific cues, suggesting that the early bilinguals’ exposure to Spanish did not fundamentally change their representations of English phonology. However, neither did the early bilinguals show more sensitivity than the monolinguals to Spanish sounds. The late bilinguals, however, were significantly more accurate than either of the other groups. These findings indicate that listeners with varying exposure to English and Spanish are able to use language-specific cues in a nonce-word language categorization task. Differences in how, and not only when, a language was acquired may influence listener sensitivity to more difficult cues, and the advantage for phonemic cues may reflect the greater salience of categories unique to each language. Implications for foreign-accent categorization and cross-language speech perception are discussed, and future directions are outlined to better understand how salience varies across language-specific phonemic and phonetic cues. PMID:27445947
Francis, Alexander L; Driscoll, Courtney
2006-09-01
We examined the effect of perceptual training on a well-established hemispheric asymmetry in speech processing. Eighteen listeners were trained to use a within-category difference in voice onset time (VOT) to cue talker identity. Successful learners (n=8) showed faster response times for stimuli presented only to the left ear than for those presented only to the right. The development of a left-ear/right-hemisphere advantage for processing a prototypically phonetic cue supports a model of speech perception in which lateralization is driven by functional demands (talker identification vs. phonetic categorization) rather than by acoustic stimulus properties alone.
Effects of prosodically modulated sub-phonetic variation on lexical competition.
Salverda, Anne Pier; Dahan, Delphine; Tanenhaus, Michael K; Crosswhite, Katherine; Masharov, Mikhail; McDonough, Joyce
2007-11-01
Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
Speech recognition: Acoustic phonetic and lexical knowledge representation
NASA Astrophysics Data System (ADS)
Zue, V. W.
1983-02-01
The purpose of this program is to develop a speech data base facility under which the acoustic characteristics of speech sounds in various contexts can be studied conveniently; investigate the phonological properties of a large lexicon of, say, 10,000 words, and determine to what extent the phonotactic constraints can be utilized in speech recognition; study the acoustic cues that are used to mark word boundaries; develop a test bed in the form of a large-vocabulary IWR system to study the interactions of acoustic, phonetic and lexical knowledge; and develop a limited continuous speech recognition system with the goal of recognizing any English word from its spelling in order to assess the interactions of higher-level knowledge sources.
NASA Astrophysics Data System (ADS)
Several articles addressing topics in speech research are presented. The topics include: exploring the functional significance of physiological tremor: a biospectroscopic approach; differences between experienced and inexperienced listeners to deaf speech; a language-oriented view of reading and its disabilities; phonetic factors in letter detection; categorical perception; short-term recall by deaf signers of American Sign Language; a common basis for auditory sensory storage in perception and immediate memory; phonological awareness and verbal short-term memory; initiation versus execution time during manual and oral counting by stutterers; trading relations in the perception of speech by five-year-old children; the role of the strap muscles in pitch lowering; phonetic validation of distinctive features; consonants and syllable boundaries; and vowel information in postvocalic frictions.
Amengual, Mark
2016-01-01
The present study examines cognate effects in the phonetic production and processing of the Catalan back mid-vowel contrast (/o/-/ɔ/) by 24 early and highly proficient Spanish-Catalan bilinguals in Majorca (Spain). Participants completed a picture-naming task and a forced-choice lexical decision task in which they were presented with either words (e.g., /bɔsk/ "forest") or non-words based on real words, but with the alternate mid-vowel pair in stressed position (*/bosk/). The same cognate and non-cognate lexical items were included in the production and lexical decision experiments. The results indicate that even though these early bilinguals maintained the back mid-vowel contrast in their productions, they had great difficulties identifying non-words and real words based on the identity of the Catalan mid-vowel. The analyses revealed language dominance and cognate effects: Spanish-dominants exhibited higher error rates than Catalan-dominants, and production and lexical decision accuracy were also affected by cognate status. The present study contributes to the discussion of the organization of early bilinguals' dominant and non-dominant sound systems, and proposes that exemplar theoretic approaches can be extended to include bilingual lexical connections that account for the interactions between the phonetic and lexical levels of early bilingual individuals.
Speech recognition: Acoustic-phonetic knowledge acquisition and representation
NASA Astrophysics Data System (ADS)
Zue, Victor W.
1988-09-01
The long-term research goal is to develop and implement speaker-independent continuous speech recognition systems. It is believed that the proper utilization of speech-specific knowledge is essential for such advanced systems. This research is thus directed toward the acquisition, quantification, and representation of acoustic-phonetic and lexical knowledge, and the application of this knowledge to speech recognition algorithms. In addition, we are exploring new speech recognition alternatives based on artificial intelligence and connectionist techniques. We developed a statistical model for predicting the acoustic realization of stop consonants in various positions in the syllable template. A unification-based grammatical formalism was developed for incorporating this model into the lexical access algorithm. We provided an information-theoretic justification for the hierarchical structure of the syllable template. We analyzed segmental duration for vowels and fricatives in continuous speech. Based on contextual information, we developed durational models for vowels and fricatives that account for over 70 percent of the variance, using data from multiple, unknown speakers. We rigorously evaluated the ability of human spectrogram readers to identify stop consonants spoken by many talkers and in a variety of phonetic contexts. Incorporating the declarative knowledge used by the readers, we developed a knowledge-based system for stop identification. We achieved system performance comparable to that of the readers.
Automatic detection of articulation disorders in children with cleft lip and palate.
Maier, Andreas; Hönig, Florian; Bocklet, Tobias; Nöth, Elmar; Stelzle, Florian; Nkenke, Emeka; Schuster, Maria
2009-11-01
Speech of children with cleft lip and palate (CLP) is sometimes still disordered even after adequate surgical and nonsurgical therapies. Such speech shows complex articulation disorders, which are usually assessed perceptually, consuming time and manpower. Hence, there is a need for an easy-to-apply and reliable automatic method. To create a reference for an automatic system, speech data of 58 children with CLP were assessed perceptually by experienced speech therapists for characteristic phonetic disorders at the phoneme level. The first part of the article aims to detect such characteristics by a semiautomatic procedure and the second to evaluate a fully automatic, thus simple, procedure. The methods are based on a combination of speech processing algorithms. The semiautomatic method achieves moderate to good agreement (kappa approximately 0.6) for the detection of all phonetic disorders. On a speaker level, significant correlations between the perceptual evaluation and the automatic system of 0.89 are obtained. The fully automatic system yields a correlation on the speaker level of 0.81 to the perceptual evaluation. This correlation is in the range of the inter-rater correlation of the listeners. The automatic speech evaluation is able to detect phonetic disorders at an expert's level without any additional human postprocessing.
Jacewicz, Ewa; Fox, Robert Allen
2015-01-01
Purpose To investigate how linguistic knowledge interacts with indexical knowledge in older children's perception under demanding listening conditions created by extensive talker variability. Method Twenty-five 9- to 12-year-old children, 12 from North Carolina (NC) and 13 from Wisconsin (WI), identified 12 vowels in isolated hVd-words produced by 120 talkers representing the two dialects (NC and WI), both genders, and three age groups (generations) of residents from the same geographic locations as the listeners. Results Identification rates were higher for responses to talkers from the same dialect as the listeners and for female speech. Listeners were sensitive to systematic positional variations in vowels and their dynamic structure (formant movement) associated with generational differences in vowel pronunciation resulting from sound change in a speech community. Overall identification rate was 71.7%, which is 8.5% lower than for the adults responding to the same stimuli in Jacewicz and Fox (2012). Conclusions Typically developing older children are successful in dealing with both phonetic and indexical variation related to talker dialect, gender, and generation. They are less consistent than the adults, most likely due to their less efficient encoding of acoustic-phonetic information in the speech of multiple talkers and relative inexperience with indexical variation. PMID:24686520
NASA Astrophysics Data System (ADS)
Liberman, A. M.
1983-09-01
This report is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications. Manuscripts cover the following topics: The association between comprehension of spoken sentences and early reading ability: The role of phonetic representation; Phonetic coding and order memory in relation to reading proficiency: A comparison of short-term memory for temporal and spatial order information; Exploring the oral and written language errors made by language disabled children; Perceiving phonetic events; Converging evidence in support of common dynamical principles for speech and movement coordination; Phase transitions and critical behavior in human bimanual coordination; Timing and coarticulation for alveolo-palatals and sequences of alveolar + [j] in Catalan; V-to-C coarticulation in Catalan VCV sequences: An articulatory and acoustical study; Prosody and the /S/-/c/ distinction; Intersections of tone and intonation in Thai; Simultaneous measurements of vowels produced by a hearing-impaired speaker; Extending formant transitions may not improve aphasics' perception of stop consonant place of articulation; Against a role of chirp identification in duplex perception; Further evidence for the role of relative timing in speech: A reply to Barry; Review (Phonological intervention: Concepts and procedures); and Review (Temporal variables in speech).
Typological Asymmetries in Round Vowel Harmony: Support from Artificial Grammar Learning
Finley, Sara
2012-01-01
Providing evidence for the universal tendencies of patterns in the world’s languages can be difficult, as it is impossible to sample all possible languages, and linguistic samples are subject to interpretation. However, experimental techniques such as artificial grammar learning paradigms make it possible to uncover the psychological reality of claimed universal tendencies. This paper addresses learning of phonological patterns (systematic tendencies in the sounds in language). Specifically, I explore the role of phonetic grounding in learning round harmony, a phonological process in which words must contain either all round vowels ([o, u]) or all unround vowels ([i, e]). The phonetic precursors to round harmony are such that mid vowels ([o, e]), which receive the greatest perceptual benefit from harmony, are most likely to trigger harmony. High vowels ([i, u]), however, are cross-linguistically less likely to trigger round harmony. Adult participants were exposed to a miniature language that contained a round harmony pattern in which the harmony source triggers were either high vowels ([i, u]) (poor harmony source triggers) or mid vowels ([o, e]) (ideal harmony source triggers). Only participants who were exposed to the ideal mid vowel harmony source triggers were successfully able to generalize the harmony pattern to novel instances, suggesting that perception and phonetic naturalness play a role in learning. PMID:23264713
Shi, Lu-Feng; Koenig, Laura L
2016-09-01
Nonnative listeners have difficulty recognizing English words due to underdeveloped acoustic-phonetic and/or lexical skills. The present study used Boothroyd and Nittrouer's (1988) j factor to tease apart these two components of word recognition. Participants included 15 native English and 29 native Russian listeners. Fourteen and 15 of the Russian listeners reported English (ED) and Russian (RD) to be their dominant language, respectively. Listeners were presented with 119 consonant-vowel-consonant real and nonsense words in speech-spectrum noise at +6 dB SNR. Responses were scored for word and phoneme recognition, the logarithmic quotient of which yielded j. Word and phoneme recognition was comparable between native and ED listeners but poorer in RD listeners. Analysis of j indicated less effective use of lexical information in RD than in native and ED listeners. Lexical processing was strongly correlated with the length of residence in the United States. Language background is important for nonnative word recognition. Lexical skills can be regarded as nativelike in ED nonnative listeners. Compromised word recognition in ED listeners is unlikely to be a result of poor lexical processing. Performance should be interpreted with caution for listeners dominant in their first language, whose word recognition is affected by both lexical and acoustic-phonetic factors.
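For readers unfamiliar with the j factor, the standard relation (as usually stated following Boothroyd and Nittrouer, 1988) links the probability of recognizing a whole word, p_w, to the probability of recognizing its constituent phonemes, p_p:

    p_w = p_p^{\,j} \qquad\Longrightarrow\qquad j = \frac{\log p_w}{\log p_p}

For a CVC nonsense word whose three phonemes are perceived independently, j is close to 3; lexical knowledge makes the phonemes of a real word partially redundant and pulls j below 3. As a worked example (values assumed, not taken from the study), p_p = 0.80 and p_w = 0.58 give j = log(0.58)/log(0.80) ≈ 2.4, whereas a listener making little use of lexical context would show j closer to 3 for the same phoneme score.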
[Children with specific language impairment: electrophysiological and pedaudiological findings].
Rinker, T; Hartmann, K; Smith, E; Reiter, R; Alku, P; Kiefer, M; Brosch, S
2014-08-01
Auditory deficits may be at the core of the language delay in children with Specific Language Impairment (SLI). It was therefore hypothesized that children with SLI perform poorly on 4 tests typically used to diagnose central auditory processing disorder (CAPD), as well as in the processing of phonetic and tone stimuli in an electrophysiological experiment. 14 children with SLI (mean age 61.7 months) and 16 children without SLI (mean age 64.9 months) were tested with 4 tasks: non-word repetition, language discrimination in noise, directional hearing, and dichotic listening. The electrophysiological mismatch negativity (MMN) recording employed sine tones (600 vs. 650 Hz) and phonetic stimuli (/ε/ versus /e/). Control children and children with SLI differed significantly in the non-word repetition as well as in the dichotic listening task, but not in the two other tasks. Only the control children recognized the frequency difference in the MMN experiment. The phonetic difference was discriminated by both groups; however, effects were longer lasting for the control children. Group differences were not significant. Children with SLI show limitations in auditory processing that involve a complex task (repeating unfamiliar or difficult material) and show subtle deficits in auditory processing at the neural level.
Colin, C; Radeau, M; Soquet, A; Demolin, D; Colin, F; Deltenre, P
2002-04-01
The McGurk-MacDonald illusory percept is obtained by dubbing an incongruent articulatory movement onto an auditory phoneme. This type of audiovisual speech perception contributes to the assessment of theories of speech perception. The mismatch negativity (MMN) reflects the detection of a deviant stimulus within the auditory short-term memory and, besides an acoustic component, possesses, under certain conditions, a phonetic one. The present study assessed the existence of an MMN evoked by McGurk-MacDonald percepts elicited by audiovisual stimuli with constant auditory components. Cortical evoked potentials were recorded using the oddball paradigm on 8 adults in 3 experimental conditions: auditory alone, visual alone and audiovisual stimulation. The occurrence of illusory percepts was confirmed in an additional psychophysical condition. The auditory deviant syllables and the audiovisual incongruent syllables elicited a significant MMN at Fz. In the visual condition, no negativity was observed either at Fz or at Oz. An MMN can be evoked by visual articulatory deviants, provided they are presented in a suitable auditory context leading to a phonetically significant interaction. The recording of an MMN elicited by illusory McGurk percepts suggests that audiovisual integration mechanisms in speech take place rather early during the perceptual processes.
Schertz, Jessamyn; Cho, Taehong; Lotto, Andrew; Warner, Natasha
2015-01-01
The current work examines native Korean speakers’ perception and production of stop contrasts in their native language (L1, Korean) and second language (L2, English), focusing on three acoustic dimensions that are all used, albeit to different extents, in both languages: voice onset time (VOT), f0 at vowel onset, and closure duration. Participants used all three cues to distinguish the L1 Korean three-way stop distinction in both production and perception. Speakers’ productions of the L2 English contrasts were reliably distinguished using both VOT and f0 (even though f0 is only a very weak cue to the English contrast), and, to a lesser extent, closure duration. In contrast to the relative homogeneity of the L2 productions, group patterns on a forced-choice perception task were less clear-cut, due to considerable individual differences in perceptual categorization strategies, with listeners using either primarily VOT duration, primarily f0, or both dimensions equally to distinguish the L2 English contrast. Differences in perception, which were stable across experimental sessions, were not predicted by individual variation in production patterns. This work suggests that reliance on multiple cues in representation of a phonetic contrast can form the basis for distinct individual cue-weighting strategies in phonetic categorization. PMID:26644630
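One common way to quantify an individual listener's cue weighting of this kind (not necessarily the analysis used in the study) is to fit a logistic regression to that listener's binary categorization responses and compare the standardized coefficients for each cue. The simulated listener, cue ranges, and coefficient values below are illustrative assumptions.

    # Estimate relative reliance on VOT vs. f0 from simulated categorization responses.
    # The data-generating listener and all parameter values are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 400
    vot = rng.uniform(0, 80, n)       # voice onset time (ms)
    f0 = rng.uniform(180, 280, n)     # f0 at vowel onset (Hz)

    # Simulated listener who relies mostly on VOT, and only a little on f0.
    decision = 0.15 * (vot - 40) + 0.02 * (f0 - 230) + rng.normal(0, 1, n)
    resp = (decision > 0).astype(int)  # 1 = "voiceless" response

    # Standardize the cues so the coefficients are comparable.
    X = np.column_stack([(vot - vot.mean()) / vot.std(), (f0 - f0.mean()) / f0.std()])
    model = LogisticRegression().fit(X, resp)
    w_vot, w_f0 = model.coef_[0]
    print(f"VOT weight: {w_vot:.2f}, f0 weight: {w_f0:.2f}, "
          f"relative VOT reliance: {abs(w_vot) / (abs(w_vot) + abs(w_f0)):.2f}")

Fitting the same model per listener and comparing the relative weights is one straightforward way to visualize the primarily-VOT, primarily-f0, and mixed strategies the abstract describes.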
Tone Attrition in Mandarin Speakers of Varying English Proficiency
Creel, Sarah C.
2017-01-01
Purpose The purpose of this study was to determine whether the degree of dominance of Mandarin–English bilinguals' languages affects phonetic processing of tone content in their native language, Mandarin. Method We tested 72 Mandarin–English bilingual college students with a range of language-dominance profiles in the 2 languages and ages of acquisition of English. Participants viewed 2 photographs at a time while hearing a familiar Mandarin word referring to 1 photograph. The names of the 2 photographs diverged in tone, vowels, or both. Word recognition was evaluated using clicking accuracy, reaction times, and an online recognition measure (gaze) and was compared in the 3 conditions. Results Relative proficiency in English was correlated with reduced word recognition success in tone-disambiguated trials, but not in vowel-disambiguated trials, across all 3 dependent measures. This selective attrition for tone content emerged even though all bilinguals had learned Mandarin from birth. Lengthy experience with English thus weakened tone use. Conclusions This finding has implications for the question of the extent to which bilinguals' 2 phonetic systems interact. It suggests that bilinguals may not process pitch information language-specifically and that processing strategies from the dominant language may affect phonetic processing in the nondominant language—even when the latter was learned natively. PMID:28124064
Late positive slow waves as markers of chunking during encoding
Nogueira, Ana M. L.; Bueno, Orlando F. A.; Manzano, Gilberto M.; Kohn, André F.; Pompéia, Sabine
2015-01-01
Electrophysiological markers of chunking of words during encoding have mostly been shown in studies that present pairs of related stimuli. In these cases it is difficult to disentangle cognitive processes that reflect distinctiveness (i.e., conspicuous items because they are related), perceived association between related items and unified representations of various items, or chunking. Here, we propose a paradigm that enables the determination of a separate Event-related Potential (ERP) marker of these cognitive processes using sequentially related word triads. Twenty-three young healthy individuals viewed 80 15-word lists composed of unrelated items except for the three words in the middle serial positions (triads), which could be either unrelated (control list), related perceptually, phonetically or semantically. ERP amplitudes were measured at encoding of each one of the words in the triads. We analyzed two latency intervals (350–400 and 400–800 ms) at midline locations. Behaviorally, we observed a progressive facilitation in the immediate free recall of the words in the triads depending on the relations between their items (control < perceptual < phonetic < semantic), but only semantically related items were recalled as chunks. P300-like deflections were observed for perceptually deviant stimuli. A reduction of amplitude of a component akin to the N400 was found for words that were phonetically and semantically associated with prior items and therefore were not associated to chunking. Positive slow wave (PSW) amplitudes increased as successive phonetically and semantically related items were presented, but they were observed earlier and were more prominent at Fz for semantic associates. PSWs at Fz and Cz also correlated with recall of semantic word chunks. This confirms prior claims that PSWs at Fz are potential markers of chunking which, in the proposed paradigm, were modulated differently from the detection of deviant stimuli and of relations between stimuli. PMID:26283984
Overall intelligibility, articulation, resonance, voice and language in a child with Nager syndrome.
Van Lierde, Kristiane M; Luyten, Anke; Mortier, Geert; Tijskens, Anouk; Bettens, Kim; Vermeersch, Hubert
2011-02-01
The purpose of this study was to provide a description of the language and speech (intelligibility, voice, resonance, articulation) in a 7-year-old Dutch-speaking boy with Nager syndrome. To reveal these features, comparison was made with an age- and gender-matched child with a similar palatal or hearing problem. Language was tested with an age-appropriate language test, namely the Dutch version of the Clinical Evaluation of Language Fundamentals. Regarding articulation, a phonetic inventory, phonetic analysis, and phonological process analysis were performed. A nominal scale with four categories was used to judge the overall speech intelligibility. A voice and resonance assessment included a videolaryngostroboscopy, a perceptual evaluation, acoustic analysis, and nasometry. The most striking communication problems in this child were expressive and receptive language delay, moderately impaired speech intelligibility, the presence of phonetic and phonological disorders, resonance disorders, and a high-pitched voice. The explanation for this pattern of communication is not completely straightforward. The language and the phonological impairment, only present in the child with the Nager syndrome, are not part of a more general developmental delay. The resonance disorders can be related to the cleft palate, but were not present in the child with the isolated cleft palate. One might assume that the cul-de-sac resonance, the much decreased mandibular movement, and the restricted tongue lifting are caused by the restricted jaw mobility and micrognathia. To what extent the suggested mandibular distraction osteogenesis in early childhood allows increased mandibular movement and better speech outcome with increased oral resonance is a subject for further research. According to the results of this study, speech and language management must focus on receptive and expressive language skills and linguistic conceptualization, correct phonetic placement, and the modification of hypernasality and nasal emission.
Turker, Sabrina; Reiterer, Susanne M.; Seither-Preisler, Annemarie; Schneider, Peter
2017-01-01
Recent research has shown that the morphology of certain brain regions may indeed correlate with a number of cognitive skills such as musicality or language ability. The main aim of the present study was to explore the extent to which foreign language aptitude, in particular phonetic coding ability, is influenced by the morphology of Heschl’s gyrus (HG; auditory cortex), working memory capacity, and musical ability. In this study, the auditory cortices of German-speaking individuals (N = 30; 13 males/17 females; aged 20–40 years) with high and low scores in a number of language aptitude tests were compared. The subjects’ language aptitude was measured by three different tests, namely a Hindi speech imitation task (phonetic coding ability), an English pronunciation assessment, and the Modern Language Aptitude Test (MLAT). Furthermore, working memory capacity and musical ability were assessed to reveal their relationship with foreign language aptitude. On the behavioral level, significant correlations were found between phonetic coding ability, English pronunciation skills, musical experience, and language aptitude as measured by the MLAT. Parts of all three tests measuring language aptitude correlated positively and significantly with each other, supporting their validity for measuring components of language aptitude. Remarkably, the number of instruments played by subjects showed significant correlations with all language aptitude measures and musicality, whereas, the number of foreign languages did not show any correlations. With regard to the neuroanatomy of auditory cortex, adults with very high scores in the Hindi testing and the musicality test (AMMA) demonstrated a clear predominance of complete posterior HG duplications in the right hemisphere. This may reignite the discussion of the importance of the right hemisphere for language processing, especially when linked or common resources are involved, such as the inter-dependency between phonetic and musical aptitude. PMID:29250017
Dynamic Encoding of Acoustic Features in Neural Responses to Continuous Speech.
Khalighinejad, Bahar; Cruzatto da Silva, Guilherme; Mesgarani, Nima
2017-02-22
Humans are unique in their ability to communicate using spoken language. However, it remains unclear how the speech signal is transformed and represented in the brain at different stages of the auditory pathway. In this study, we characterized electroencephalography responses to continuous speech by obtaining the time-locked responses to phoneme instances (phoneme-related potential). We showed that responses to different phoneme categories are organized by phonetic features. We found that each instance of a phoneme in continuous speech produces multiple distinguishable neural responses occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Comparing the patterns of phoneme similarity in the neural responses and the acoustic signals confirms a repetitive appearance of acoustic distinctions of phonemes in the neural data. Analysis of the phonetic and speaker information in neural activations revealed that different time intervals jointly encode the acoustic similarity of both phonetic and speaker categories. These findings provide evidence for a dynamic neural transformation of low-level speech features as they propagate along the auditory pathway, and form an empirical framework to study the representational changes in learning, attention, and speech disorders. SIGNIFICANCE STATEMENT We characterized the properties of evoked neural responses to phoneme instances in continuous speech. We show that each instance of a phoneme in continuous speech produces several observable neural responses at different times occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Each temporal event explicitly encodes the acoustic similarity of phonemes, and linguistic and nonlinguistic information are best represented at different time intervals. Finally, we show a joint encoding of phonetic and speaker information, where the neural representation of speakers is dependent on phoneme category. These findings provide compelling new evidence for dynamic processing of speech sounds in the auditory pathway. Copyright © 2017 Khalighinejad et al.
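As an illustration of the time-locked averaging described above, the following sketch computes per-phoneme average responses from a preprocessed EEG recording, assuming phoneme onset times and labels are available (e.g., from a forced alignment of the speech). The epoch window and all names are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def phoneme_related_potentials(eeg, sr, onsets, labels, tmin=-0.1, tmax=0.4):
    """Average EEG epochs time-locked to phoneme onsets, per phoneme label.

    eeg    : array (n_channels, n_samples) of preprocessed EEG
    sr     : EEG sampling rate in Hz
    onsets : phoneme onset times in seconds (e.g., from forced alignment)
    labels : phoneme label for each onset
    Returns a dict mapping each label to its (n_channels, n_window) average.
    """
    pre, post = int(tmin * sr), int(tmax * sr)
    epochs = {}
    for t, lab in zip(onsets, labels):
        i = int(round(t * sr))
        if i + pre < 0 or i + post > eeg.shape[1]:
            continue  # skip phonemes whose window falls outside the recording
        epochs.setdefault(lab, []).append(eeg[:, i + pre:i + post])
    return {lab: np.mean(eps, axis=0) for lab, eps in epochs.items()}
```

Averages for individual phonemes can then be grouped by phonetic feature (e.g., manner or place of articulation) to compare response patterns across categories, as in the analysis described above.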
Flaherty, Mary; Dent, Micheal L.; Sawusch, James R.
2017-01-01
The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable initial stop consonants and if this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with “d” or “t” and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal. PMID:28562597
Flaherty, Mary; Dent, Micheal L; Sawusch, James R
2017-01-01
The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable initial stop consonants and if this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with "d" or "t" and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal.
NASA Astrophysics Data System (ADS)
Whiteside, Sandra P.; Henry, Luisa; Dobbin, Rachel
2004-08-01
Voice onset time (VOT) data for the plosives /p b t d k g/ in two vowel contexts (/i/ and /ɑ/) for 5 groups of 46 boys and girls aged 5;8 (5 years, 8 months) to 13;2 years were investigated to examine patterns of sex differences. Results indicated some evidence of females displaying longer VOT values than males. In addition, these differences were found to be most marked in the data of the 13;2-year-olds. Furthermore, the sex differences in the VOT values displayed phonetic context effects. For example, the greatest sex differences were observed for the voiceless plosives, and within the context of the vowel /i/.
Coordination and interpretation of vocal and visible resources: 'trail-off' conjunctions.
Walker, Gareth
2012-03-01
The empirical focus of this paper is a conversational turn-taking phenomenon in which conjunctions produced immediately after a point of possible syntactic and pragmatic completion are treated by co-participants as points of possible completion and transition relevance. The data for this study are audio-video recordings of 5 unscripted face-to-face interactions involving native speakers of US English, yielding 28 'trail-off' conjunctions. Detailed sequential analysis of talk is combined with analysis of visible features (including gaze, posture, gesture and involvement with material objects) and technical phonetic analysis. A range of phonetic and visible features are shown to regularly co-occur in the production of 'trail-off' conjunctions. These features distinguish them from other conjunctions followed by the cessation of talk.
Effects of prosodically-modulated sub-phonetic variation on lexical competition
Salverda, Anne Pier; Dahan, Delphine; Tanenhaus, Michael K.; Crosswhite, Katherine; Masharov, Mikhail; McDonough, Joyce
2007-01-01
Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically-conditioned phonetic variation. PMID:17141751
Analysis of speech sounds is left-hemisphere predominant at 100-150ms after sound onset.
Rinne, T; Alho, K; Alku, P; Holi, M; Sinkkonen, J; Virtanen, J; Bertrand, O; Näätänen, R
1999-04-06
Hemispheric specialization of human speech processing has been found in brain imaging studies using fMRI and PET. Due to the restricted time resolution, these methods cannot, however, determine the stage of auditory processing at which this specialization first emerges. We used a dense electrode array covering the whole scalp to record the mismatch negativity (MMN), an event-related brain potential (ERP) automatically elicited by occasional changes in sounds, which ranged from non-phonetic (tones) to phonetic (vowels). MMN can be used to probe auditory central processing on a millisecond scale with no attention-dependent task requirements. Our results indicate that speech processing occurs predominantly in the left hemisphere at the early, pre-attentive level of auditory analysis.
Identification of emotional intonation evaluated by fMRI.
Wildgruber, D; Riecker, A; Hertrich, I; Erb, M; Grodd, W; Ethofer, T; Ackermann, H
2005-02-15
During acoustic communication among human beings, emotional information can be expressed both by the propositional content of verbal utterances and by the modulation of speech melody (affective prosody). It is well established that linguistic processing is bound predominantly to the left hemisphere of the brain. By contrast, the encoding of emotional intonation has been assumed to depend specifically upon right-sided cerebral structures. However, prior clinical and functional imaging studies yielded discrepant data with respect to interhemispheric lateralization and intrahemispheric localization of brain regions contributing to processing of affective prosody. In order to delineate the cerebral network engaged in the perception of emotional tone, functional magnetic resonance imaging (fMRI) was performed during recognition of prosodic expressions of five different basic emotions (happy, sad, angry, fearful, and disgusted) and during phonetic monitoring of the same stimuli. As compared to baseline at rest, both tasks yielded widespread bilateral hemodynamic responses within frontal, temporal, and parietal areas, the thalamus, and the cerebellum. A comparison of the respective activation maps, however, revealed comprehension of affective prosody to be bound to a distinct right-hemisphere pattern of activation, encompassing posterior superior temporal sulcus (Brodmann Area [BA] 22), dorsolateral (BA 44/45), and orbitobasal (BA 47) frontal areas. Activation within left-sided speech areas, in contrast, was observed during the phonetic task. These findings indicate that partially distinct cerebral networks subserve processing of phonetic and intonational information during speech perception.
Talker variability in audio-visual speech perception
Heald, Shannon L. M.; Nusbaum, Howard C.
2014-01-01
A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker’s face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred. PMID:25076919
Talker variability in audio-visual speech perception.
Heald, Shannon L M; Nusbaum, Howard C
2014-01-01
A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred.
Articulatory characteristics of Hungarian ‘transparent’ vowels
Benus, Stefan; Gafos, Adamantios I.
2007-01-01
Using a combination of magnetometry and ultrasound, we examined the articulatory characteristics of the so-called ‘transparent’ vowels [iː], [i], and [eː] in Hungarian vowel harmony. Phonologically, transparent vowels are front, but they can be followed by either front or back suffixes. However, a finer look reveals an underlying phonetic coherence in two respects. First, transparent vowels in back harmony contexts show a less advanced (more retracted) tongue body posture than phonemically identical vowels in front harmony contexts: e.g. [i] in buli-val is less advanced than [i] in bili-vel. Second, transparent vowels in monosyllabic stems selecting back suffixes are also less advanced than phonemically identical vowels in stems selecting front suffixes: e.g. [iː] in ír, taking back suffixes, compared to [iː] of hír, taking front suffixes, is less advanced when these stems are produced in bare form (no suffixes). We thus argue that the phonetic degree of tongue body horizontal position correlates with the phonological alternation in suffixes. A hypothesis that emerges from this work is that a plausible phonetic basis for transparency can be found in quantal characteristics of the relation between articulation and acoustics of transparent vowels. More broadly, the proposal is that the phonology of transparent vowels is better understood when their phonological patterning is studied together with their articulatory and acoustic characteristics. PMID:18389086
Amengual, Mark
2016-01-01
The present study examines cognate effects in the phonetic production and processing of the Catalan back mid-vowel contrast (/o/-/ɔ/) by 24 early and highly proficient Spanish-Catalan bilinguals in Majorca (Spain). Participants completed a picture-naming task and a forced-choice lexical decision task in which they were presented with either words (e.g., /bɔsk/ “forest”) or non-words based on real words, but with the alternate mid-vowel pair in stressed position (*/bosk/). The same cognate and non-cognate lexical items were included in the production and lexical decision experiments. The results indicate that even though these early bilinguals maintained the back mid-vowel contrast in their productions, they had great difficulties identifying non-words and real words based on the identity of the Catalan mid-vowel. The analyses revealed language dominance and cognate effects: Spanish-dominants exhibited higher error rates than Catalan-dominants, and production and lexical decision accuracy were also affected by cognate status. The present study contributes to the discussion of the organization of early bilinguals' dominant and non-dominant sound systems, and proposes that exemplar theoretic approaches can be extended to include bilingual lexical connections that account for the interactions between the phonetic and lexical levels of early bilingual individuals. PMID:27199849
SO, CONNIE K.; BEST, CATHERINE T.
2010-01-01
This study examined the perception of the four Mandarin lexical tones by Mandarin-naïve Hong Kong Cantonese, Japanese, and Canadian English listener groups. Their performance on an identification task, following a brief familiarization task, was analyzed in terms of tonal sensitivities (A-prime scores on correct identifications) and tonal errors (confusions). The A-prime results revealed that the English listeners' sensitivity to Tone 4 identifications specifically was significantly lower than that of the other two groups. The analysis of tonal errors revealed that all listener groups showed perceptual confusion of tone pairs with similar phonetic features (T1–T2, T1–T4 and T2–T3 pairs), but not of those with completely dissimilar features (T1–T3, T2–T4, and T3–T4). Language specific errors were also observed in their performance, which may be explained within the framework of the Perceptual Assimilation Model (PAM: Best, 1995; Best & Tyler, 2007). The findings imply that linguistic experience with native tones does not necessarily facilitate non-native tone perception. Rather, the phonemic status and the phonetic features (similarities or dissimilarities) between the tonal systems of the target language and the listeners' native languages play critical roles in the perception of non-native tones. PMID:20583732
Mo, Lei
2017-01-01
The scientific community has been divided as to the origin of individual differences in perceiving the sounds of a second language (L2). There are two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. A previous study showed that such individual variability is linked to the perceivers’ speech-specific capabilities, rather than the perceivers’ psychoacoustic abilities. However, we assume that the selection of participants and the parameters of the sound stimuli might not have been appropriate. Therefore, we adjusted the sound stimuli and recorded event-related potentials (ERPs) from two groups of early, proficient Cantonese (L1)-Mandarin (L2) bilinguals who differed in their mastery of the Mandarin (L2) phonetic contrast /in-ing/, to explore whether the individual differences in perceiving L2 stem from participants’ ability to discriminate various pure tones (frequency, duration and pattern). To precisely measure the participants’ acoustic discrimination, mismatch negativity (MMN) elicited by the oddball paradigm was recorded in the experiment. The results showed that significant differences between good perceivers (GPs) and poor perceivers (PPs) were found in the three general acoustic conditions (frequency, duration and pattern), and the MMN amplitude for GPs was significantly larger than for PPs. Therefore, our results support a general psychoacoustic origin of individual variability in L2 phonetic mastery. PMID:29176886
Lin, Yi; Fan, Ruolin; Mo, Lei
2017-01-01
The scientific community has been divided as to the origin of individual differences in perceiving the sounds of a second language (L2). There are two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. A previous study showed that such individual variability is linked to the perceivers' speech-specific capabilities, rather than the perceivers' psychoacoustic abilities. However, we assume that the selection of participants and the parameters of the sound stimuli might not have been appropriate. Therefore, we adjusted the sound stimuli and recorded event-related potentials (ERPs) from two groups of early, proficient Cantonese (L1)-Mandarin (L2) bilinguals who differed in their mastery of the Mandarin (L2) phonetic contrast /in-ing/, to explore whether the individual differences in perceiving L2 stem from participants' ability to discriminate various pure tones (frequency, duration and pattern). To precisely measure the participants' acoustic discrimination, mismatch negativity (MMN) elicited by the oddball paradigm was recorded in the experiment. The results showed that significant differences between good perceivers (GPs) and poor perceivers (PPs) were found in the three general acoustic conditions (frequency, duration and pattern), and the MMN amplitude for GPs was significantly larger than for PPs. Therefore, our results support a general psychoacoustic origin of individual variability in L2 phonetic mastery.
Adaptation to novel accents by toddlers
White, Katherine S.; Aslin, Richard N.
2010-01-01
Word recognition is a balancing act: listeners must be sensitive to phonetic detail to avoid confusing similar words, yet, at the same time, be flexible enough to adapt to phonetically variable pronunciations, such as those produced by speakers of different dialects or by non-native speakers. Recent work has demonstrated that young toddlers are sensitive to phonetic detail during word recognition; pronunciations that deviate from the typical phonological form lead to a disruption of processing. However, it is not known whether young word learners show the flexibility that is characteristic of adult word recognition. The present study explores whether toddlers can adapt to artificial accents in which there is a vowel category shift with respect to the native language. 18–20-month-olds heard mispronunciations of familiar words (e.g., vowels were shifted from [a] to [æ]: “dog” pronounced as “dag”). In test, toddlers were tolerant of mispronunciations if they had recently been exposed to the same vowel shift, but not if they had been exposed to standard pronunciations or other vowel shifts. The effects extended beyond particular items heard in exposure to words sharing the same vowels. These results indicate that, like adults, toddlers show flexibility in their interpretation of phonological detail. Moreover, they suggest that effects of top-down knowledge on the reinterpretation of phonological detail generalize across the phono-lexical system. PMID:21479106
The speech perception skills of children with and without speech sound disorder.
Hearnshaw, Stephanie; Baker, Elise; Munro, Natalie
To investigate whether Australian English-speaking children with and without speech sound disorder (SSD) differ in their overall speech perception accuracy. Additionally, to investigate differences in the perception of specific phonemes and the association between speech perception and speech production skills. Twenty-five Australian English-speaking children aged 48-60 months participated in this study. The SSD group included 12 children and the typically developing (TD) group included 13 children. Children completed routine speech and language assessments in addition to an experimental Australian-English lexical and phonetic judgement task based on Rvachew's Speech Assessment and Interactive Learning System (SAILS) program (Rvachew, 2009). This task included eight words across four word-initial phonemes: /k, ɹ, ʃ, s/. Children with SSD showed significantly poorer perceptual accuracy on the lexical and phonetic judgement task compared with TD peers. The phonemes /ɹ/ and /s/ were most frequently perceived in error across both groups. Additionally, the phoneme /ɹ/ was most commonly produced in error. There was also a positive correlation between overall speech perception and speech production scores. Children with SSD perceived speech less accurately than their typically developing peers. The findings suggest that an Australian-English variation of a lexical and phonetic judgement task similar to the SAILS program is promising and worthy of a larger-scale study. Copyright © 2017 Elsevier Inc. All rights reserved.
Cross-language comparisons of contextual variation in the production and perception of vowels
NASA Astrophysics Data System (ADS)
Strange, Winifred
2005-04-01
In the last two decades, a considerable amount of research has investigated second-language (L2) learners' problems with perception and production of non-native vowels. Most studies have been conducted using stimuli in which the vowels are produced and presented in simple, citation-form (lists) monosyllabic or disyllabic utterances. In my laboratory, we have investigated the spectral (static/dynamic formant patterns) and temporal (syllable duration) variation in vowel productions as a function of speech style (list/sentence utterances), speaking rate (normal/rapid), sentence focus (narrow focus/post-focus) and phonetic context (voicing/place of surrounding consonants). Data will be presented for a set of languages that include large and small vowel inventories, stress-, syllable-, and mora-timed prosody, and that vary in the phonological/phonetic function of vowel length, diphthongization, and palatalization. Results show language-specific patterns of contextual variation that affect the cross-language acoustic similarity of vowels. Research on cross-language patterns of perceived phonetic similarity by naive listeners suggests that listeners' knowledge of native language (L1) patterns of contextual variation influences their L1/L2 similarity judgments and, subsequently, their discrimination of L2 contrasts. Implications of these findings for assessing L2 learners' perception of vowels and for developing laboratory training procedures to improve L2 vowel perception will be discussed. [Work supported by NIDCD.]
Wada, Junichiro; Hideshima, Masayuki; Inukai, Shusuke; Matsuura, Hiroshi; Wakabayashi, Noriyuki
2014-01-01
To investigate the effects of the width and cross-sectional shape of the major connectors of maxillary dentures located in the middle area of the palate on the accuracy of phonetic output of consonants using an originally developed speech recognition system. Nine adults (4 males and 5 females, aged 24-26 years) with sound dentition were recruited. The following six sounds were considered: [∫i], [t∫i], [ɾi], [ni], [çi], and [ki]. The experimental connectors were fabricated to simulate bars (narrow, 8-mm width) and plates (wide, 20-mm width). Two types of cross-sectional shapes in the sagittal plane were specified: flat and plump edge. The appearance ratio of phonetic segment labels was calculated with the speech recognition system to indicate the accuracy of phonetic output. Statistical analysis was conducted using one-way ANOVA and Tukey's test. The mean appearance ratio of correct labels (MARC) significantly decreased for [ni] with the plump edge (narrow connector) and for [ki] with both the flat and plump edge (wide connectors). For [çi], the MARCs tended to be lower with flat plates. There were no significant differences for the other consonants. The width and cross-sectional shape of the connectors had limited effects on the articulation of consonants at the palate. © 2015 S. Karger AG, Basel.
The role of linguistic experience in the processing of probabilistic information in production.
Gustafson, Erin; Goldrick, Matthew
2018-01-01
Speakers track the probability that a word will occur in a particular context and utilize this information during phonetic processing. For example, content words that have high probability within a discourse tend to be realized with reduced acoustic/articulatory properties. Such probabilistic information may influence L1 and L2 speech processing in distinct ways (reflecting differences in linguistic experience across groups and the overall difficulty of L2 speech processing). To examine this issue, L1 and L2 speakers performed a referential communication task, describing sequences of simple actions. The two groups of speakers showed similar effects of discourse-dependent probabilistic information on production, suggesting that L2 speakers can successfully track discourse-dependent probabilities and use such information to modulate phonetic processing.
Measures of native and non-native rhythm in a quantity language.
Stockmal, Verna; Markus, Dace; Bond, Dzintra
2005-01-01
The traditional phonetic classification of language rhythm as stress-timed or syllable-timed is attributed to Pike. Recently, two different proposals have been offered for describing the rhythmic structure of languages from acoustic-phonetic measurements. Ramus has suggested a metric based on the proportion of vocalic intervals and the variability (SD) of consonantal intervals. Grabe has proposed Pairwise Variability Indices (nPVI, rPVI) calculated from the differences in vocalic and consonantal durations between successive syllables. We have calculated both the Ramus and Grabe metrics for Latvian, traditionally considered a syllable rhythm language, and for Latvian as spoken by Russian learners. Native speakers and proficient learners were very similar whereas low-proficiency learners showed high variability on some properties. The metrics did not provide an unambiguous classification of Latvian.
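The Ramus measures (%V, the proportion of vocalic material, and ΔC, the standard deviation of consonantal interval durations) and the Pairwise Variability Indices of Grabe and Low have standard published definitions; as a point of reference, the sketch below computes them from a hand-labelled sequence of vocalic and consonantal interval durations. The function name and input format are illustrative, not taken from the study.

```python
import numpy as np

def rhythm_metrics(intervals):
    """Compute %V and deltaC (Ramus) plus nPVI and rPVI (Grabe & Low)
    from a list of (duration_in_seconds, kind) pairs, kind being 'V' or 'C'."""
    v = np.array([d for d, k in intervals if k == 'V'])
    c = np.array([d for d, k in intervals if k == 'C'])
    pct_v = 100 * v.sum() / (v.sum() + c.sum())   # proportion of vocalic material
    delta_c = c.std()                             # SD of consonantal interval durations
    # Pairwise Variability Indices over successive same-kind intervals:
    npvi = 100 * np.mean([abs(a - b) / ((a + b) / 2) for a, b in zip(v, v[1:])])
    rpvi = np.mean([abs(a - b) for a, b in zip(c, c[1:])])
    return pct_v, delta_c, npvi, rpvi

# e.g. rhythm_metrics([(0.08, 'C'), (0.12, 'V'), (0.10, 'C'),
#                      (0.09, 'V'), (0.14, 'C'), (0.11, 'V')])
```

By convention the normalized index (nPVI) is reported for vocalic intervals and the raw index (rPVI) for consonantal intervals, which is what the sketch does.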
Stepanov, Arthur; Pavlič, Matic; Stateva, Penka; Reboul, Anne
2018-01-01
This study investigated whether early bilingualism and early musical training positively influence the ability to discriminate between prosodic patterns corresponding to different syntactic structures in otherwise phonetically identical sentences in an unknown language. In a same-different discrimination task, participants (N = 108) divided into four groups (monolingual non-musicians, monolingual musicians, bilingual non-musicians, and bilingual musicians) listened to pairs of short sentences in a language unknown to them (French). In discriminating phonetically identical but prosodically different sentences, musicians, bilinguals, and bilingual musicians outperformed the controls. However, there was no interaction between bilingualism and musical training to suggest an additive effect. These results underscore the significant role of both types of experience in enhancing the listeners' sensitivity to prosodic information.
The longitudinal development of fine phonetic detail in late learners of Spanish
NASA Astrophysics Data System (ADS)
Casillas, Joseph Vincent
The present investigation analyzed early second language (L2) learning in adults. A common finding regarding L2 acquisition is that early learning appears to be necessary in order to perform on the same level as a native speaker. Surprisingly, many current theoretical models posit that the human ability to learn novel speech sounds remains active throughout the lifespan. In light of this fact, this project examines L2 acquisition in late learners with a special focus on L1/L2 use, input, and context of learning. Research regarding L1/L2 use has tended to be observational, and throughout the previous six decades of L2 research the role of input has been minimized and left largely unexplained. This study includes two production experiments and two perception experiments and focuses on the role of L1/L2 use and input in L2 acquisition in late learners in order to add to current research regarding their role in accurately and efficiently acquiring a novel speech sound. Moreover, this research is concerned with shedding light on when, if at all, during the acquisition process late learners begin to acquire a new, language-specific phonetic system, and the amount of exposure necessary in order to acquire L2 fine-phonetic detail. The experimental design presented in the present study also aims to shed light on the temporal relationship between production and perception with regard to category formation. To begin to fully understand these issues, the present study proposes a battery of tasks which were administered throughout the course of a domestic immersion program. Domestic immersion provides an understudied linguistic context in which L1 use is minimized, target language use is maximized, and L2 input is abundant. The results suggest that L2 phonetic category formation occurs at an early stage of development, and is perceptually driven. Moreover, early L2 representations are fragile, and especially susceptible to cross-language interference. Together, the studies undertaken for this work add to our understanding of the initial stages of the acquisition of L2 phonology in adult learners.
ERIC Educational Resources Information Center
Hwang, Shin Ja J., Ed.; Lommel, Arle R., Ed.
Papers from the conference include: "English and Human Morphology: 'Naturalness' in Counting and Measuring" (Sullivan); "Phonetic and Phonemic Change Revisited" (Lockwood); "Virtual Reality" (Langacker); "Path Directions in ASL Agreement Verbs are Predictable on Semantic Grounds" (Taub); "Temporal…
Masking release due to linguistic and phonetic dissimilarity between the target and masker speech
Calandruccio, Lauren; Brouwer, Susanne; Van Engen, Kristin J.; Dhar, Sumitrajit; Bradlow, Ann R.
2013-01-01
Purpose: To investigate masking release for speech maskers for linguistically and phonetically close (English and Dutch) and distant (English and Mandarin) language pairs. Method: Twenty monolingual speakers of English with normal audiometric thresholds participated. Data are reported for an English sentence recognition task in English, Dutch and Mandarin competing speech maskers (Experiment I) and noise maskers (Experiment II) that were matched either to the long-term average speech spectra or to the temporal modulations of the speech maskers from Experiment I. Results: Listener performance increased as the target-to-masker linguistic distance increased (English-in-English < English-in-Dutch < English-in-Mandarin). Conclusions: Spectral differences between maskers can account for some, but not all, of the variation in performance between maskers; however, temporal differences did not seem to play a significant role. PMID:23800811
Broad phonetic class definition driven by phone confusions
NASA Astrophysics Data System (ADS)
Lopes, Carla; Perdigão, Fernando
2012-12-01
Intermediate representations between the speech signal and phones may be used to improve discrimination among phones that are often confused. These representations are usually found according to broad phonetic classes, which are defined by a phonetician. This article proposes an alternative data-driven method to generate these classes. Phone confusion information from the analysis of the output of a phone recognition system is used to find clusters at high risk of mutual confusion. A metric is defined to compute the distance between phones. The results, using TIMIT data, show that the proposed confusion-driven phone clustering method is an attractive alternative to the approaches based on human knowledge. A hierarchical classification structure to improve phone recognition is also proposed using a discriminative weight training method. Experiments show improvements in phone recognition on the TIMIT database compared to a baseline system.
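The article's own distance metric and recognizer setup are not reproduced here; the following sketch only illustrates the general idea of confusion-driven clustering: a phone confusion matrix is row-normalized, symmetrized, turned into a dissimilarity, and phones at high risk of mutual confusion are grouped by agglomerative clustering. The normalization, linkage method and number of classes are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def broad_classes_from_confusions(conf, phones, n_classes=6):
    """Group phones that are often mutually confused into broad classes.

    conf     : (N, N) count matrix; conf[i, j] = times phone i was recognized as phone j
    phones   : list of N phone labels
    n_classes: number of broad classes to return
    """
    p = conf / conf.sum(axis=1, keepdims=True)   # row-normalize to confusion probabilities
    sim = (p + p.T) / 2                          # symmetrize
    np.fill_diagonal(sim, 1.0)
    dist = 1.0 - sim                             # high mutual confusion -> small distance
    z = linkage(squareform(dist, checks=False), method='average')
    ids = fcluster(z, t=n_classes, criterion='maxclust')
    classes = {}
    for ph, cid in zip(phones, ids):
        classes.setdefault(cid, []).append(ph)
    return list(classes.values())
```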
Neural correlates of phonetic convergence and speech imitation.
Garnier, Maëva; Lamalle, Laurent; Sato, Marc
2013-01-01
Speakers unconsciously tend to mimic their interlocutor's speech during communicative interaction. This study aims at examining the neural correlates of phonetic convergence and deliberate imitation, in order to explore whether imitation of phonetic features, deliberate, or unconscious, might reflect a sensory-motor recalibration process. Sixteen participants listened to vowels with pitch varying around the average pitch of their own voice, and then produced the identified vowels, while their speech was recorded and their brain activity was imaged using fMRI. Three degrees and types of imitation were compared (unconscious, deliberate, and inhibited) using a go-nogo paradigm, which enabled the comparison of brain activations during the whole imitation process, its active perception step, and its production. Speakers followed the pitch of voices they were exposed to, even unconsciously, without being instructed to do so. After being informed about this phenomenon, 14 participants were able to inhibit it, at least partially. The results of whole brain and ROI analyses support the fact that both deliberate and unconscious imitations are based on similar neural mechanisms and networks, involving regions of the dorsal stream, during both perception and production steps of the imitation process. While no significant difference in brain activation was found between unconscious and deliberate imitations, the degree of imitation, however, appears to be determined by processes occurring during the perception step. Four regions of the dorsal stream: bilateral auditory cortex, bilateral supramarginal gyrus (SMG), and left Wernicke's area, indeed showed an activity that correlated significantly with the degree of imitation during the perception step.
Taxamatch, an Algorithm for Near (‘Fuzzy’) Matching of Scientific Names in Taxonomic Databases
Rees, Tony
2014-01-01
Misspellings of organism scientific names create barriers to optimal storage and organization of biological data, reconciliation of data stored under different spelling variants of the same name, and appropriate responses from user queries to taxonomic data systems. This study presents an analysis of the nature of the problem from first principles, reviews some available algorithmic approaches, and describes Taxamatch, an improved name matching solution for this information domain. Taxamatch employs a custom Modified Damerau-Levenshtein Distance algorithm in tandem with a phonetic algorithm, together with a rule-based approach incorporating a suite of heuristic filters, to produce improved levels of recall, precision and execution time over the existing dynamic programming algorithms n-grams (as bigrams and trigrams) and standard edit distance. Although entirely phonetic methods are faster than Taxamatch, they are inferior in the area of recall since many real-world errors are non-phonetic in nature. Excellent performance of Taxamatch (as recall, precision and execution time) is demonstrated against a reference database of over 465,000 genus names and 1.6 million species names, as well as against a range of error types as present at both genus and species levels in three sets of sample data for species and four for genera alone. An ancillary authority matching component is included which can be used both for misspelled names and for otherwise matching names where the associated cited authorities are not identical. PMID:25247892
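Taxamatch's custom modification of the Damerau-Levenshtein distance, its phonetic component and its heuristic filters are not reproduced here; for reference, the sketch below implements the standard optimal-string-alignment variant of Damerau-Levenshtein distance (single-character edits plus adjacent transpositions), the family of edit distance on which the algorithm builds.

```python
def osa_distance(a, b):
    """Optimal string alignment distance: insertions, deletions, substitutions
    and transpositions of adjacent characters (a restricted Damerau-Levenshtein)."""
    a, b = a.lower(), b.lower()
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # adjacent transposition
    return d[len(a)][len(b)]

# e.g. osa_distance("Onchorhynchus", "Oncorhynchus") == 1
```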
York Papers in Linguistics, 16.
ERIC Educational Resources Information Center
Harlow, S. J., Ed.; Warner, A. R., Ed.
Articles on diverse areas of linguistics include the following: "Correlative Constructions in Chinese" (Steve Harlow, Connie Cullen); "Xhosa Isinkalakahliso Again" (John Kelly); "Conversational Phonetics: Some Aspects of News Receipts in Everyday Talk" (John Local); "Parametric Interpretation in Yorktalk"…
ERIC Educational Resources Information Center
Moore, John
1980-01-01
Gives a brief description of the features of Esperanto: phonetic spelling, a regular grammar with no exceptions to rules, an international vocabulary with a rule for adding new words, and a word-building system making full use of affixes. (Author/MES)
Toutios, Asterios; Narayanan, Shrikanth S
2016-01-01
Real-time magnetic resonance imaging (rtMRI) of the moving vocal tract during running speech production is an important emerging tool for speech production research providing dynamic information of a speaker's upper airway from the entire mid-sagittal plane or any other scan plane of interest. There have been several advances in the development of speech rtMRI and corresponding analysis tools, and their application to domains such as phonetics and phonological theory, articulatory modeling, and speaker characterization. An important recent development has been the open release of a database that includes speech rtMRI data from five male and five female speakers of American English each producing 460 phonetically balanced sentences. The purpose of the present paper is to give an overview and outlook of the advances in rtMRI as a tool for speech research and technology development.
Dataglove measurement of joint angles in sign language handshapes
Eccarius, Petra; Bour, Rebecca; Scheidt, Robert A.
2012-01-01
In sign language research, we understand little about articulatory factors involved in shaping phonemic boundaries or the amount (and articulatory nature) of acceptable phonetic variation between handshapes. To date, there exists no comprehensive analysis of handshape based on the quantitative measurement of joint angles during sign production. The purpose of our work is to develop a methodology for collecting and visualizing quantitative handshape data in an attempt to better understand how handshapes are produced at a phonetic level. In this pursuit, we seek to quantify the flexion and abduction angles of the finger joints using a commercial data glove (CyberGlove; Immersion Inc.). We present calibration procedures used to convert raw glove signals into joint angles. We then implement those procedures and evaluate their ability to accurately predict joint angle. Finally, we provide examples of how our recording techniques might inform current research questions. PMID:23997644
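The calibration procedures developed in the paper are not reproduced here; the sketch below shows a simple two-point linear calibration, one common way to map raw bend-sensor readings to joint angles, with entirely hypothetical reference values.

```python
import numpy as np

def two_point_calibration(raw_flat, raw_bent, angle_flat=0.0, angle_bent=90.0):
    """Return a function mapping raw sensor readings to joint angles (degrees),
    assuming a linear sensor response between two known reference postures."""
    gain = (angle_bent - angle_flat) / (raw_bent - raw_flat)
    return lambda raw: angle_flat + gain * (np.asarray(raw) - raw_flat)

# Hypothetical readings for one finger's flexion sensor, recorded with the
# joint held flat (0 degrees) and fully bent (90 degrees):
to_angle = two_point_calibration(raw_flat=121.0, raw_bent=205.0)
print(to_angle([121.0, 163.0, 205.0]))   # -> [ 0. 45. 90.]
```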
Mulak, Karen E; Best, Catherine T; Tyler, Michael D; Kitamura, Christine; Irwin, Julia R
2013-01-01
By 12 months, children grasp that a phonetic change to a word can change its identity (phonological distinctiveness). However, they must also grasp that some phonetic changes do not (phonological constancy). To test development of phonological constancy, sixteen 15-month-olds and sixteen 19-month-olds completed an eye-tracking task that tracked their gaze to named versus unnamed images for familiar words spoken in their native (Australian) and an unfamiliar non-native (Jamaican) regional accent of English. Both groups looked longer at named than unnamed images for Australian pronunciations, but only 19-month-olds did so for Jamaican pronunciations, indicating that phonological constancy emerges by 19 months. Vocabulary size predicted 15-month-olds' identifications for the Jamaican pronunciations, suggesting vocabulary growth is a viable predictor for phonological constancy development. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.
The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing
Gow, David W.
2012-01-01
Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing. PMID:22498237
Alveolar and Velarized Laterals in Albanian and in the Viennese Dialect.
Moosmüller, Sylvia; Schmid, Carolin; Kasess, Christian H
2016-12-01
A comparison of alveolar and velarized lateral realizations in two language varieties, Albanian and the Viennese dialect, has been performed. Albanian distinguishes the two laterals phonemically, whereas in the Viennese dialect, the velarized lateral was introduced by language contact with Czech immigrants. A categorical distinction between the two lateral phonemes is fully maintained in Albanian. Results are not as straightforward in the Viennese dialect. Most prominently, female speakers realize the velarized lateral, if at all, only in word-final position, indicating the application of a phonetically motivated process. The realization of the velarized lateral by male speakers, on the other hand, indicates that the velarized lateral replaced the former alveolar lateral phoneme. Alveolar laterals are either realized in perceptually salient positions, thus governed by an input-switch rule, or in front vowel contexts, thus subject to coarticulatory influences. Our results illustrate the subtle interplay of phonology, phonetics and sociolinguistics.
Baer-Henney, Dinah; Kügler, Frank; van de Vijver, Ruben
2015-09-01
Using the artificial language paradigm, we studied the acquisition of morphophonemic alternations with exceptions by 160 German adult learners. We tested the acquisition of two types of alternations in two regularity conditions while additionally varying length of training. In the first alternation, a vowel harmony, backness of the stem vowel determines backness of the suffix. This process is grounded in substance (phonetic motivation), and this universal phonetic factor bolsters learning a generalization. In the second alternation, tenseness of the stem vowel determines backness of the suffix vowel. This process is not based in substance, but it reflects a phonotactic property of German and our participants benefit from this language-specific factor. We found that learners use both cues, while substantive bias surfaces mainly in the most unstable situation. We show that language-specific and universal factors interact in learning. Copyright © 2014 Cognitive Science Society, Inc.
Mismatch Negativity with Visual-only and Audiovisual Speech
Ponton, Curtis W.; Bernstein, Lynne E.; Auer, Edward T.
2009-01-01
The functional organization of cortical speech processing is thought to be hierarchical, increasing in complexity and proceeding from primary sensory areas centrifugally. The current study used the mismatch negativity (MMN) obtained with electrophysiology (EEG) to investigate the early latency period of visual speech processing under both visual-only (VO) and audiovisual (AV) conditions. Current density reconstruction (CDR) methods were used to model the cortical MMN generator locations. MMNs were obtained with VO and AV speech stimuli at early latencies (approximately 82-87 ms peak in time waveforms relative to the acoustic onset) and in regions of the right lateral temporal and parietal cortices. Latencies were consistent with bottom-up processing of the visible stimuli. We suggest that a visual pathway extracts phonetic cues from visible speech, and that previously reported effects of AV speech in classical early auditory areas, given later reported latencies, could be attributable to modulatory feedback from visual phonetic processing. PMID:19404730
Speech perception and production in severe environments
NASA Astrophysics Data System (ADS)
Pisoni, David B.
1990-09-01
The goal was to acquire new knowledge about speech perception and production in severe environments such as high masking noise, increased cognitive load or sustained attentional demands. Changes were examined in speech production under these adverse conditions through acoustic analysis techniques. One set of studies focused on the effects of noise on speech production. The experiments in this group were designed to generate a database of speech obtained in noise and in quiet. A second set of experiments was designed to examine the effects of cognitive load on the acoustic-phonetic properties of speech. Talkers were required to carry out a demanding perceptual motor task while they read lists of test words. A final set of experiments explored the effects of vocal fatigue on the acoustic-phonetic properties of speech. Both cognitive load and vocal fatigue are present in many applications where speech recognition technology is used, yet their influence on speech production is poorly understood.
Clinical linguistics: its past, present and future.
Perkins, Michael R
2011-11-01
Historiography is a growing area of research within the discipline of linguistics, but so far the subfield of clinical linguistics has received virtually no systematic attention. This article attempts to rectify this by tracing the development of the discipline from its pre-scientific days up to the present time. As part of this, I include the results of a survey of articles published in Clinical Linguistics & Phonetics between 1987 and 2008, which shows, for example, a consistent primary focus on phonetics and phonology at the expense of grammar, semantics and pragmatics. I also trace the gradual broadening of the discipline from its roots in structural linguistics to its current reciprocal relationship with speech and language pathology and a range of other academic disciplines. Finally, I consider the scope of clinical linguistic research in 2011 and assess how the discipline seems likely to develop in the future.
Van Lancker Sidtis, Diana; Cameron, Krista; Sidtis, John J.
2015-01-01
In motor speech disorders, dysarthric features impacting intelligibility, articulation, fluency, and voice emerge more saliently in conversation than in repetition, reading, or singing. A role of the basal ganglia in these task discrepancies has been identified. Further, more recent studies of naturalistic speech in basal ganglia dysfunction have revealed that formulaic language is more impaired than novel language. This descriptive study extends these observations to a case of severely dysfluent dysarthria due to a parkinsonian syndrome. Dysfluencies were quantified and compared for conversation, two forms of repetition, reading, recited speech, and singing. Other measures examined phonetic inventories, word forms, and formulaic language. Phonetic, syllabic, and lexical dysfluencies were more abundant in conversation than in other task conditions. Formulaic expressions in conversation were reduced compared to normal speakers. A proposed explanation supports the notion that the basal ganglia contribute to formulation of internal models for execution of speech. PMID:22774929
Speaker variability augments phonological processing in early word learning
Rost, Gwyneth C.; McMurray, Bob
2010-01-01
Infants in the early stages of word learning have difficulty learning lexical neighbors (i.e., word pairs that differ by a single phoneme), despite the ability to discriminate the same contrast in a purely auditory task. While prior work has focused on top-down explanations for this failure (e.g. task demands, lexical competition), none has examined if bottom-up acoustic-phonetic factors play a role. We hypothesized that lexical neighbor learning could be improved by incorporating greater acoustic variability in the words being learned, as this may buttress still developing phonetic categories, and help infants identify the relevant contrastive dimension. Infants were exposed to pictures accompanied by labels spoken by either a single or multiple speakers. At test, infants in the single-speaker condition failed to recognize the difference between the two words, while infants who heard multiple speakers discriminated between them. PMID:19143806
Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers
NASA Astrophysics Data System (ADS)
Caballero Morales, Santiago Omar; Cox, Stephen J.
2009-12-01
Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.
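Neither the metamodels nor the transducer cascade is reproduced here; the sketch below only illustrates the underlying idea of scoring candidate words against a recognized phone sequence using a speaker-specific confusion matrix. The equal-length restriction and all names are simplifications for illustration; the techniques described above additionally handle insertions and deletions and integrate a language model when choosing the word sequence.

```python
import numpy as np

def correct_with_confusions(recognized, lexicon, confusion, phones):
    """Pick the lexicon word whose pronunciation best explains the recognized
    phone sequence under a speaker-specific confusion matrix.

    recognized : recognized phone labels (assumed same length as candidate
                 pronunciations, for simplicity)
    lexicon    : dict mapping word -> list of intended phones
    confusion  : (N, N) matrix; confusion[i, j] = P(recognized phone j | intended phone i)
    phones     : list of N phone labels indexing the matrix
    """
    idx = {p: i for i, p in enumerate(phones)}
    best, best_score = None, -np.inf
    for word, pron in lexicon.items():
        if len(pron) != len(recognized):
            continue  # a fuller model would allow insertions/deletions (e.g. via WFSTs)
        score = sum(np.log(confusion[idx[p], idx[r]] + 1e-12)
                    for p, r in zip(pron, recognized))
        if score > best_score:
            best, best_score = word, score
    return best
```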
Cross-Language Distributions of High Frequency and Phonetically Similar Cognates
Schepens, Job; Dijkstra, Ton; Grootjen, Franc; van Heuven, Walter J. B.
2013-01-01
The coinciding form and meaning similarity of cognates, e.g. ‘flamme’ (French), ‘Flamme’ (German), ‘vlam’ (Dutch), meaning ‘flame’ in English, facilitates learning of additional languages. The cross-language frequency and similarity distributions of cognates vary according to evolutionary change and language contact. We compare frequency and orthographic (O), phonetic (P), and semantic similarity of cognates, automatically identified in semi-complete lexicons of six widely spoken languages. Comparisons of P and O similarity reveal inconsistent mappings in language pairs with deep orthographies. The frequency distributions show that cognate frequency is reduced in less closely related language pairs as compared to more closely related languages (e.g., French-English vs. German-English). These frequency and similarity patterns may support a better understanding of cognate processing in natural and experimental settings. The automatically identified cognates are available in the supplementary materials, including the frequency and similarity measurements. PMID:23675449
Multistage audiovisual integration of speech: dissociating identification and detection.
Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S
2011-02-01
Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.
Advances in EPG for treatment and research: an illustrative case study.
Scobbie, James M; Wood, Sara E; Wrench, Alan A
2004-01-01
Electropalatography (EPG), a technique which reveals tongue-palate contact patterns over time, is a highly effective tool for speech research. We report here on recent developments by Articulate Instruments Ltd. These include hardware for Windows-based computers, backwardly compatible (with Reading EPG3) software systems for clinical intervention and laboratory-based analysis for EPG and acoustic data, and an enhanced clinical interface with client and file management tools. We focus here on a single case study of a child aged 10½ years who had been diagnosed with an intractable speech disorder possibly resulting ultimately from a complete cleft of hard and soft palate. We illustrate how assessment, diagnosis and treatment of the intractable speech disorder are undertaken using this new generation of instrumental phonetic support. We also look forward to future developments in articulatory phonetics that will link EPG with ultrasound for research and clinical communities.
TOUTIOS, ASTERIOS; NARAYANAN, SHRIKANTH S.
2016-01-01
Real-time magnetic resonance imaging (rtMRI) of the moving vocal tract during running speech production is an important emerging tool for speech production research providing dynamic information of a speaker's upper airway from the entire mid-sagittal plane or any other scan plane of interest. There have been several advances in the development of speech rtMRI and corresponding analysis tools, and their application to domains such as phonetics and phonological theory, articulatory modeling, and speaker characterization. An important recent development has been the open release of a database that includes speech rtMRI data from five male and five female speakers of American English each producing 460 phonetically balanced sentences. The purpose of the present paper is to give an overview and outlook of the advances in rtMRI as a tool for speech research and technology development. PMID:27833745
Mulak, Karen E.; Best, Catherine T.; Tyler, Michael D.; Kitamura, Christine; Irwin, Julia R.
2014-01-01
By 12 months, children grasp that a phonetic change to a word can change its identity (phonological distinctiveness). However, they must also grasp that some phonetic changes do not (phonological constancy). To test development of phonological constancy, 16 15-month-olds and 16 19-month-olds completed an eye-tracking task that tracked their gaze to named versus unnamed images for familiar words spoken in their native (Australian) and an unfamiliar non-native (Jamaican) regional accent of English. Both groups looked longer at named than unnamed images for Australian pronunciations, but only 19-month-olds did so for Jamaican pronunciations, indicating that phonological constancy emerges by 19 months. Vocabulary size predicted 15-month-olds' identifications for the Jamaican pronunciations, suggesting vocabulary growth is a viable predictor for phonological constancy development. PMID:23521607
Compton, Michael T; Lunden, Anya; Cleary, Sean D; Pauselli, Luca; Alolayan, Yazeed; Halpern, Brooke; Broussard, Beth; Crisafio, Anthony; Capulong, Leslie; Balducci, Pierfrancesco Maria; Bernardini, Francesco; Covington, Michael A
2018-02-12
Acoustic phonetic methods are useful in examining some symptoms of schizophrenia; we used such methods to understand the underpinnings of aprosody. We hypothesized that, compared to controls and patients without clinically rated aprosody, patients with aprosody would exhibit reduced variability in: pitch (F0), jaw/mouth opening and tongue height (formant F1), tongue front/back position and/or lip rounding (formant F2), and intensity/loudness. Audiorecorded speech was obtained from 98 patients (including 25 with clinically rated aprosody and 29 without) and 102 unaffected controls using five tasks: one describing a drawing, two based on spontaneous speech elicited through a question (Tasks 2 and 3), and two based on reading prose excerpts (Tasks 4 and 5). We compared groups on variation in pitch (F0), formants F1 and F2, and intensity/loudness. Regarding pitch variation, patients with aprosody differed significantly from controls in Task 5 in both unadjusted tests and those adjusted for sociodemographics. For the standard deviation (SD) of F1, no significant differences were found in adjusted tests. Regarding SD of F2, patients with aprosody had lower values than controls in Tasks 3, 4, and 5. For variation in intensity/loudness, patients with aprosody had lower values than patients without aprosody and controls across the five tasks. Findings could represent a step toward developing new methods for measuring and tracking the severity of this specific negative symptom using acoustic phonetic parameters; such work is relevant to other psychiatric and neurological disorders. Copyright © 2018 Elsevier B.V. All rights reserved.
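The pitch and intensity variability measures described above can be approximated as in the sketch below; librosa is an assumed tool choice (the abstract does not name the study's toolchain), and the formant SDs (F1/F2) would require a formant tracker such as Praat instead.

```python
# Sketch of two of the variability measures discussed above (SD of pitch and
# SD of frame-wise intensity) for one recorded task.
import numpy as np
import librosa

def pitch_and_intensity_sd(wav_path):
    y, sr = librosa.load(wav_path, sr=None)            # keep native sampling rate
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
    f0_voiced = f0[voiced_flag]                         # ignore unvoiced frames
    rms = librosa.feature.rms(y=y)[0]
    intensity_db = librosa.amplitude_to_db(rms)         # frame-wise loudness proxy
    return np.nanstd(f0_voiced), np.std(intensity_db)

# f0_sd, int_sd = pitch_and_intensity_sd("task5_reading.wav")  # hypothetical file
```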
Kharlamov, Viktor; Campbell, Kenneth; Kazanina, Nina
2011-11-01
Speech sounds are not always perceived in accordance with their acoustic-phonetic content. For example, an early and automatic process of perceptual repair, which ensures conformity of speech inputs to the listener's native language phonology, applies to individual input segments that do not exist in the native inventory or to sound sequences that are illicit according to the native phonotactic restrictions on sound co-occurrences. The present study with Russian and Canadian English speakers shows that listeners may perceive phonetically distinct and licit sound sequences as equivalent when the native language system provides robust evidence for mapping multiple phonetic forms onto a single phonological representation. In Russian, due to an optional but productive t-deletion process that affects /stn/ clusters, the surface forms [sn] and [stn] may be phonologically equivalent and map to a single phonological form /stn/. In contrast, [sn] and [stn] clusters are usually phonologically distinct in (Canadian) English. Behavioral data from identification and discrimination tasks indicated that [sn] and [stn] clusters were more confusable for Russian than for English speakers. The EEG experiment employed an oddball paradigm with nonwords [asna] and [astna] used as the standard and deviant stimuli. A reliable mismatch negativity response was elicited approximately 100 msec postchange in the English group but not in the Russian group. These findings point to a perceptual repair mechanism that is engaged automatically at a prelexical level to ensure immediate encoding of speech inputs in phonological terms, which in turn enables efficient access to the meaning of a spoken utterance.
Early language delay phenotypes and correlation with later linguistic abilities.
Petinou, Kakia; Spanoudis, George
2014-01-01
The present study focused on examining the continuity and directionality of language skills in late talkers (LTs) and identifying factors which might contribute to language outcomes at the age of 3 years. Subjects were 23 Cypriot-Greek-speaking toddlers classified as LTs and 24 age-matched typically developing peers (TDs). Participants were assessed at 28, 32 and 36 months, using various linguistic measures such as size of receptive and expressive vocabulary, mean length of utterance (MLU) of words and number of consonants produced. Data on otitis media familial history were also analyzed. The ANOVA results indicated parallel developmental profiles between the two groups, with a language lag characterizing LTs. Concurrent correlations between measures showed that poor phonetic inventories in the LT group at 28 months predicted poor MLU at the ages of 32 and 36 months. Significant cross-lagged correlations supported the finding that poor phonetic inventories at 28 months served as a good predictor for MLU and expressive vocabulary at the age of 32 and for MLU at 36 months. The results highlight the negative effect of early language delay on language skills up to the age of 3 years and lend support to the current literature regarding the universal linguistic picture of early and persistent language delay. Based on the current results, poor phonetic inventories at the age of intake might serve as a predictive factor for language outcomes at the age of 36 months. Finally, the findings are discussed in view of the need for further research with a focus on more language-sensitive tools in testing later language outcomes. © 2014 S. Karger AG, Basel.
Neural correlates of phonetic convergence and speech imitation
Garnier, Maëva; Lamalle, Laurent; Sato, Marc
2013-01-01
Speakers unconsciously tend to mimic their interlocutor's speech during communicative interaction. This study aims at examining the neural correlates of phonetic convergence and deliberate imitation, in order to explore whether imitation of phonetic features, deliberate or unconscious, might reflect a sensory-motor recalibration process. Sixteen participants listened to vowels with pitch varying around the average pitch of their own voice, and then produced the identified vowels, while their speech was recorded and their brain activity was imaged using fMRI. Three degrees and types of imitation were compared (unconscious, deliberate, and inhibited) using a go-nogo paradigm, which enabled the comparison of brain activations during the whole imitation process, its active perception step, and its production. Speakers followed the pitch of voices they were exposed to, even unconsciously, without being instructed to do so. After being informed about this phenomenon, 14 participants were able to inhibit it, at least partially. The results of whole brain and ROI analyses support the fact that both deliberate and unconscious imitations are based on similar neural mechanisms and networks, involving regions of the dorsal stream, during both perception and production steps of the imitation process. While no significant difference in brain activation was found between unconscious and deliberate imitation, the degree of imitation appears to be determined by processes occurring during the perception step. Four regions of the dorsal stream: bilateral auditory cortex, bilateral supramarginal gyrus (SMG), and left Wernicke's area, indeed showed an activity that correlated significantly with the degree of imitation during the perception step. PMID:24062704
Australian children with cleft palate achieve age-appropriate speech by 5 years of age.
Chacon, Antonia; Parkin, Melissa; Broome, Kate; Purcell, Alison
2017-12-01
Children with cleft palate demonstrate atypical speech sound development, which can influence their intelligibility, literacy and learning. There is limited documentation regarding how speech sound errors change over time in cleft palate speech and the effect that these errors have upon mono- versus polysyllabic word production. The objective of this study was to examine the phonetic and phonological speech skills of children with cleft palate at ages 3 and 5. A cross-sectional observational design was used. Eligible participants were aged 3 or 5 years with a repaired cleft palate. The Diagnostic Evaluation of Articulation and Phonology (DEAP) Articulation subtest and a non-standardised list of mono- and polysyllabic words were administered once for each child. The Profile of Phonology (PROPH) was used to analyse each child's speech. N = 51 children with cleft palate participated in the study. Three-year-old children with cleft palate produced significantly more speech errors than their typically-developing peers, but no difference was apparent at 5 years. The 5-year-olds demonstrated greater phonetic and phonological accuracy than the 3-year-old children. Polysyllabic words were more affected by errors than monosyllables in the 3-year-old group only. Children with cleft palate are prone to phonetic and phonological speech errors in their preschool years. Most of these speech errors resolve by 5 years, when speech approximates that of typically-developing children. At 3 years, word shape has an influence upon phonological speech accuracy. Speech pathology intervention is indicated to support the intelligibility of these children from their earliest stages of development. Copyright © 2017 Elsevier B.V. All rights reserved.
Symbols for the General British English Vowel Sounds
ERIC Educational Resources Information Center
Lewis, J. Windsor
1975-01-01
Deals with Hans G. Hoffmann's critique that the new phonetic symbols contained in A. S. Hornby's "Advanced Learner's Dictionary" (Oxford University Press, London, 1974) are harder to learn than the older system of transcription. (IFS/WGA)
Papers and Studies in Contrastive Linguistics. Volume Twenty.
ERIC Educational Resources Information Center
Fisiak, Jacek, Ed.
Papers on contrastive linguistics in this volume include: "Contrastive Discourse Analysis in Language Usage" (Juliane House); "Typology and Contrastive Analysis" (Vlasta Strakova); "On the Tenability of the Notion 'Pragmatic Equivalence' in Contrastive Analysis" (Karol Janicki); "On the Relevance of Phonetic,…
Multilevel Analysis in Analyzing Speech Data
ERIC Educational Resources Information Center
Guddattu, Vasudeva; Krishna, Y.
2011-01-01
The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…
Careers in Speech Communication.
ERIC Educational Resources Information Center
Speech Communication Association, New York, NY.
Brief discussions in this pamphlet suggest educational and career opportunities in the following fields of speech communication: rhetoric, public address, and communication; theatre, drama, and oral interpretation; radio, television, and film; speech pathology and audiology; speech science, phonetics, and linguistics; and speech education.…
Artificial Intelligence in Speech Understanding: Two Applications at C.R.I.N.
ERIC Educational Resources Information Center
Carbonell, N.; And Others
1986-01-01
This article explains how techniques of artificial intelligence are applied to expert systems for acoustic-phonetic decoding, phonological interpretation, and multi-knowledge sources for man-machine dialogue implementation. The basic ideas are illustrated with short examples. (Author/JDH)
77 FR 68204 - Privacy Act of 1974, as Amended
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-15
... (optional); phonetic name (optional); skills/experience (optional); educational background (optional... information to the United States Department of Justice for the purpose of representing or providing legal... or her individual capacity where the Department of Justice or Treasury has agreed to represent the...
ERIC Educational Resources Information Center
Fletcher, Paul
1989-01-01
Discusses the role of linguistics in the investigation of language disorders, focusing on the application of phonetics, descriptive grammatical frameworks, grammatical theory, and concepts from semantics and pragmatics to a variety of disorders and their remediation. Some trends and examples from the field of clinical linguistics are discussed. (GLR)
How People Listen to Languages They Don't Know.
ERIC Educational Resources Information Center
Lorch, Marjorie Perlman; Meara, Paul
1989-01-01
Investigation of how 19 adult males listened to and recognized unknown foreign languages (Farsi, Punjabi, Spanish, Indonesian, Arabic, Urdu) indicated that the untrained listeners made complex judgments in describing, transcribing, and identifying phonetic, segmental, suprasegmental, and other impressionistic language details. (Author/CB)
Liu, Xiaojin; Tu, Liu; Wang, Junjing; Jiang, Bo; Gao, Wei; Pan, Ximin; Li, Meng; Zhong, Miao; Zhu, Zhenzhen; Niu, Meiqi; Li, Yanyan; Zhao, Ling; Chen, Xiaoxi; Liu, Chang; Lu, Zhi; Huang, Ruiwang
2017-11-01
Early second language (L2) experience influences the neural organization of L2 in neuro-plastic terms. Previous studies tried to reveal these plastic effects of age of second language acquisition (AoA-L2) and proficiency-level in L2 (PL-L2) on the neural basis of language processing in bilinguals. Although different activation patterns have been observed during language processing in early and late bilinguals by task-fMRI, few studies reported the effect of AoA-L2 and high PL-L2 on the language network at resting state. In this study, we acquired resting-state fMRI (R-fMRI) data from 10 Cantonese (L1)-Mandarin (L2) early bilinguals (who acquired L2 at 3 years old) and 11 late bilinguals (who acquired L2 at 6 years old), and analyzed the topological properties of their language networks after controlling for daily language exposure and usage as well as PL in L1 and L2. We found that early bilinguals had significantly higher clustering coefficients and global and local efficiency, but significantly shorter characteristic path lengths, than late bilinguals. Modular analysis indicated that, compared to late bilinguals, early bilinguals showed significantly stronger intra-modular functional connectivity in the semantic and phonetic modules, and stronger inter-modular functional connectivity between the semantic and phonetic modules as well as between the phonetic and syntactic modules. Differences in global and local parameters may reflect different patterns of neuroplasticity in early and late bilinguals. These results suggest that differences in L2 experience influence the topological properties of the language network, even when late bilinguals achieve high PL-L2. Our findings may provide a new perspective on the neural mechanisms related to early and late bilinguals. Copyright © 2017 Elsevier Inc. All rights reserved.
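For readers unfamiliar with the graph metrics compared here, the toy sketch below computes clustering coefficient, characteristic path length, and global/local efficiency on a small thresholded connectivity matrix; networkx, the random matrix, and the threshold are illustrative assumptions, not the study's pipeline.

```python
# Toy illustration of the graph metrics compared across groups above, computed
# on a small undirected "language network" built by thresholding a (fake)
# functional connectivity matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_regions = 20
fc = np.abs(rng.normal(size=(n_regions, n_regions)))   # fake connectivity matrix
fc = (fc + fc.T) / 2                                    # make it symmetric
np.fill_diagonal(fc, 0)

G = nx.from_numpy_array((fc > 1.0).astype(int))         # binarize at an arbitrary threshold

print("clustering coefficient:", nx.average_clustering(G))
print("global efficiency:     ", nx.global_efficiency(G))
print("local efficiency:      ", nx.local_efficiency(G))
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
```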
McGettigan, Carolyn; Rosen, Stuart; Scott, Sophie K.
2014-01-01
Noise-vocoding is a transformation which, when applied to speech, severely reduces spectral resolution and eliminates periodicity, yielding a stimulus that sounds “like a harsh whisper” (Scott et al., 2000, p. 2401). This process simulates a cochlear implant, where the activity of many thousand hair cells in the inner ear is replaced by direct stimulation of the auditory nerve by a small number of tonotopically-arranged electrodes. Although a cochlear implant offers a powerful means of restoring some degree of hearing to profoundly deaf individuals, the outcomes for spoken communication are highly variable (Moore and Shannon, 2009). Some variability may arise from differences in peripheral representation (e.g., the degree of residual nerve survival) but some may reflect differences in higher-order linguistic processing. In order to explore this possibility, we used noise-vocoding to examine speech recognition and perceptual learning in normal-hearing listeners tested across several levels of the linguistic hierarchy: segments (consonants and vowels), single words, and sentences. Listeners improved significantly on all tasks across two test sessions. In the first session, individual differences analyses revealed two independently varying sources of variability: one lexico-semantic in nature and implicating the recognition of words and sentences, and the other an acoustic-phonetic factor associated with words and segments. However, consequent to learning, by the second session there was a more uniform covariance pattern concerning all stimulus types. A further analysis of phonetic feature recognition allowed greater insight into learning-related changes in perception and showed that, surprisingly, participants did not make full use of cues that were preserved in the stimuli (e.g., vowel duration). We discuss these findings in relation to cochlear implantation, and suggest auditory training strategies to maximize speech recognition performance in the absence of typical cues. PMID:24616669
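A conventional noise-vocoder can be sketched as below; this is a generic illustration of the transformation described above, not the authors' stimulus-generation code, and the band edges and filter settings are arbitrary.

```python
# Standard noise-vocoding sketch: split speech into a few frequency bands,
# extract each band's amplitude envelope, and use it to modulate band-limited
# noise. The result preserves temporal envelopes but discards fine structure.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(x, sr, band_edges=(100, 500, 1500, 4000)):
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        b, a = butter(4, [lo, hi], btype="bandpass", fs=sr)
        band = filtfilt(b, a, x)                 # speech restricted to this band
        envelope = np.abs(hilbert(band))         # amplitude envelope of the band
        noise = rng.standard_normal(len(x))
        noise_band = filtfilt(b, a, noise)       # noise carrier in the same band
        out += envelope * noise_band             # envelope-modulated noise
    return out / np.max(np.abs(out))             # normalize to avoid clipping

# sr = 16000; vocoded = noise_vocode(speech_samples, sr)   # hypothetical input
```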
Speech characteristics in a Ugandan child with a rare paramedian craniofacial cleft: a case report.
Van Lierde, K M; Bettens, K; Luyten, A; De Ley, S; Tungotyo, M; Balumukad, D; Galiwango, G; Bauters, W; Vermeersch, H; Hodges, A
2013-03-01
The purpose of this study is to describe the speech characteristics in an English-speaking Ugandan boy of 4.5 years who has a rare paramedian craniofacial cleft (unilateral lip, alveolar, palatal, nasal and maxillary cleft, and associated hypertelorism). Closure of the lip together with the closure of the hard and soft palate (one-stage palatal closure) was performed at the age of 5 months. Objective as well as subjective speech assessment techniques were used. The speech samples were perceptually judged for articulation, intelligibility and nasality. The Nasometer was used for the objective measurement of the nasalance values. The most striking communication problems in this child with the rare craniofacial cleft are an incomplete phonetic inventory, a severely impaired speech intelligibility with the presence of very severe hypernasality, mild nasal emission, phonetic disorders (omission of several consonants, decreased intraoral pressure in plosives, insufficient frication of fricatives and the use of a middorsum palatal stop) and phonological disorders (deletion of initial and final consonants and consonant clusters). The increased objective nasalance values are in agreement with the presence of the audible nasality disorders. The results revealed that several phonetic and phonological articulation disorders together with a decreased speech intelligibility and resonance disorders are present in the child with a rare craniofacial cleft. To what extent a secondary surgery for velopharyngeal insufficiency, combined with speech therapy, will improve speech intelligibility, articulation and resonance characteristics is a subject for further research. The results of such analyses may ultimately serve as a starting point for specific surgical and logopedic treatment that addresses the specific needs of children with rare facial clefts. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Meyer, Ted A.; Pisoni, David B.
2012-01-01
Objective The Phonetically Balanced Kindergarten (PBK) Test (Haskins, Reference Note 2) has been used for almost 50 yr to assess spoken word recognition performance in children with hearing impairments. The test originally consisted of four lists of 50 words, but only three of the lists (lists 1, 3, and 4) were considered “equivalent” enough to be used clinically with children. Our goal was to determine if the lexical properties of the different PBK lists could explain any differences between the three “equivalent” lists and the fourth PBK list (List 2) that has not been used in clinical testing. Design Word frequency and lexical neighborhood frequency and density measures were obtained from a computerized database for all of the words on the four lists from the PBK Test as well as the words from a single PB-50 (Egan, 1948) word list. Results The words in the “easy” PBK list (List 2) were of higher frequency than the words in the three “equivalent” lists. Moreover, the lexical neighborhoods of the words on the “easy” list contained fewer phonetically similar words than the neighborhoods of the words on the other three “equivalent” lists. Conclusions It is important for researchers to consider word frequency and lexical neighborhood frequency and density when constructing word lists for testing speech perception. The results of this computational analysis of the PBK Test provide additional support for the proposal that spoken words are recognized “relationally” in the context of other phonetically similar words in the lexicon. Implications of using open-set word recognition tests with children with hearing impairments are discussed with regard to the specific vocabulary and information processing demands of the PBK Test. PMID:10466571
Deep Neural Networks for Speech Separation With Application to Robust Speech Recognition
… acoustic-phonetic features. The second objective is integration of spectrotemporal context for improved separation performance. Conditional random fields will be used to encode contextual constraints. The third objective is to achieve robust ASR in the DNN framework through integrated acoustic modeling …
Perspectives on Interlanguage Phonetics and Phonology.
ERIC Educational Resources Information Center
Monroy, Rafael, Ed.; Gutierrez, Francisco, Ed.
2001-01-01
Articles in this special issue include the following: "Allophonic Splits in L2 Phonology: The Questions of Learnability" (Fred R. Eckman, Abdullah Elreyes, Gregory K. Iverson); "Native Language Influence in Learners' Assessment of English Focus" (M. L. Garcia Lecumberri); "Obstruent Voicing in English and Polish. A…
Word Recognition in Auditory Cortex
ERIC Educational Resources Information Center
DeWitt, Iain D. J.
2013-01-01
Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…
ERIC Educational Resources Information Center
Tauberer, Joshua Ian
2010-01-01
The [voice] distinction between homorganic stops and fricatives is made by a number of acoustic correlates including voicing, segment duration, and preceding vowel duration. The present work looks at [voice] from a number of multidimensional perspectives. This dissertation's focus is a corpus study of the phonetic realization of [voice] in two…
Chez nous: mon village (At Our House: My Village).
ERIC Educational Resources Information Center
Dube, Normand
This elementary French reader was designed for use in a bilingual program. It contains reading selections about life in Madawaska, Maine, illustrated vocabulary lists, and discussion questions. Also included are the words and music for two short songs and four phonetic drills. (PMP)
Alternative Control Technologies: Human Factors Issues
1998-10-01
… that instant. This removes the workload associated with having to remember which words … and, over a long period, apply painful pressure to the face. … shown that phonetically-relevant orofacial motions can be estimated from the underlying EMG activity.
Processing Problems and Language Impairment in Children.
ERIC Educational Resources Information Center
Watkins, Ruth V.
1990-01-01
The article reviews studies on the assessment of rapid auditory processing abilities. Issues in auditory processing research are identified including a link between otitis media with effusion and language learning problems. A theory that linguistically impaired children experience difficulty in perceiving and processing low phonetic substance…
Asian-Pacific Papers. Occasional Papers Number 10.
ERIC Educational Resources Information Center
McCarthy, Brian, Ed.
Sixteen papers are presented. Topics covered include language teaching, discourse analysis, code switching, phonetics, language and cultural identity, and descriptive and comparative studies. All presenters were from the Asia-Pacific area of the world. Papers include: "The Baba Malay Lexicon: Hokkien Loanwords in Baba Malay" (Anne…
The Prosodic Basis of the Tiberian Hebrew System of Accents.
ERIC Educational Resources Information Center
Dresher, Bezalel Elan
1994-01-01
It is argued that the Tiberian system of accents that annotate the text of the Hebrew Bible has a prosodic basis. Tiberian representation can best be understood by integrating results of phonological, phonetic, and psycholinguistic research on prosodic structure. (93 references) (Author/LB)
Speech Analyses of Four Children with Repaired Cleft Palates.
ERIC Educational Resources Information Center
Powers, Gene R.; And Others
1990-01-01
Spontaneous speech samples were collected from four three-year-olds with surgically repaired cleft palates. Analyses showed that subjects were similar to one another with respect to their phonetic inventories but differed considerably in the frequency and types of phonological processes used. (Author/JDD)
Transcribing Speech: Practicalities, Philosophies and Prophesies
ERIC Educational Resources Information Center
Rahilly, Joan
2011-01-01
This article outlines the main practical and philosophical developments which have contributed to current approaches to phonetic transcription. Particular contributions from scholars in the field are highlighted as seminal in shaping transcription work. Consideration is also given to the ways in which insights from clinical transcription impact…
ERIC Educational Resources Information Center
Snyder, Sarah
A booklet for limited English speakers on renting housing provides information on searching for housing, finding the right place, considerations before signing a lease, and relations with the landlord. Cartoons, questions about the message in cartoons and narrative passages, checklists on things to consider, and the phonetic pronunciation of key…
ERIC Educational Resources Information Center
Speight, Stephen
1977-01-01
The latest (July, 1976) edition of the "Concise Oxford Dictionary" is seen as "prescriptive," and of limited use to foreigners, since it lacks an international phonetic transcription. It is questioned whether sufficient treatment is given to new words, scientific words, non-British English, obscene language, change of meaning, and obsolescence.…
Training Japanese listeners to identify English /r/ and /l/: A first report
Logan, John S.; Lively, Scott E.; Pisoni, David B.
2012-01-01
Native speakers of Japanese learning English generally have difficulty differentiating the phonemes /r/ and /l/, even after years of experience with English. Previous research that attempted to train Japanese listeners to distinguish this contrast using synthetic stimuli reported little success, especially when transfer to natural tokens containing /r/ and /l/ was tested. In the present study, a different training procedure that emphasized variability among stimulus tokens was used. Japanese subjects were trained in a minimal pair identification paradigm using multiple natural exemplars contrasting /r/ and /l/ from a variety of phonetic environments as stimuli. A pretest–posttest design containing natural tokens was used to assess the effects of training. Results from six subjects showed that the new procedure was more robust than earlier training techniques. Small but reliable differences in performance were obtained between pretest and posttest scores. The results demonstrate the importance of stimulus variability and task-related factors in training nonnative speakers to perceive novel phonetic contrasts that are not distinctive in their native language. PMID:2016438
Munson, Benjamin; Johnson, Julie M.; Edwards, Jan
2013-01-01
Purpose This study examined whether experienced speech-language pathologists differ from inexperienced people in their perception of phonetic detail in children's speech. Method Convenience samples comprising 21 experienced speech-language pathologists and 21 inexperienced listeners participated in a series of tasks in which they made visual-analog scale (VAS) ratings of children's natural productions of target /s/-/θ/, /t/-/k/, and /d/-/ɡ/ in word-initial position. Listeners rated the perceptual distance between individual productions and ideal productions. Results The experienced listeners' ratings differed from inexperienced listeners' in four ways: they had higher intra-rater reliability, they showed less bias toward a more frequent sound, their ratings were more closely related to the acoustic characteristics of the children's speech, and their responses were related to a different set of predictor variables. Conclusions Results suggest that experience working as a speech-language pathologist leads to better perception of phonetic detail in children's speech. Limitations and future research are discussed. PMID:22230182
Core, Cynthia; Brown, Janean W; Larsen, Michael D; Mahshie, James
2014-01-01
The objectives of this research were to determine whether an adapted version of a Hybrid Visual Habituation procedure could be used to assess speech perception of phonetic and prosodic features of speech (vowel height, lexical stress, and intonation) in individual pre-school-age children who use cochlear implants. Nine children ranging in age from 3;4 to 5;5 participated in this study. Children were prelingually deaf, used cochlear implants, and had no other known disabilities. Children received two speech feature tests using an adaptation of a Hybrid Visual Habituation procedure. Seven of the nine children demonstrated perception of at least one speech feature with this procedure, as determined by a Bayesian linear regression analysis. At least one child demonstrated perception of each speech feature using this assessment procedure. An adapted version of the Hybrid Visual Habituation Procedure with an appropriate statistical analysis provides a way to assess phonetic and prosodic aspects of speech in pre-school-age children who use cochlear implants.
Phonetic Spelling Filter for Keyword Selection in Drug Mention Mining from Social Media
Pimpalkhute, Pranoti; Patki, Apurv; Nikfarjam, Azadeh; Gonzalez, Graciela
2014-01-01
Social media postings are rich in information that often remain hidden and inaccessible for automatic extraction due to inherent limitations of the site’s APIs, which mostly limit access via specific keyword-based searches (and limit both the number of keywords and the number of postings that are returned). When mining social media for drug mentions, one of the first problems to solve is how to derive a list of variants of the drug name (common misspellings) that can capture a sufficient number of postings. We present here an approach that filters the potential variants based on the intuition that, faced with the task of writing an unfamiliar, complex word (the drug name), users will tend to revert to phonetic spelling, and we thus give preference to variants that reflect the phonemes of the correct spelling. The algorithm allowed us to capture 50.4 – 56.0 % of the user comments using only about 18% of the variants. PMID:25717407
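A simplified sketch of the filtering idea follows; it substitutes Soundex for the paper's phoneme-based scoring, so the drug name, the single-edit variant generator, and the matching rule are illustrative assumptions rather than a reimplementation.

```python
# Simplified sketch of the variant-filtering idea: generate single-edit
# misspellings of a drug name, then keep only those that still "sound like"
# the correct spelling. Soundex stands in for the paper's phoneme-based scoring.
import string

def soundex(word):
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out, prev = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "hw":            # h/w do not reset the previous code
            prev = code
    return (out + "000")[:4]

def edit1_variants(word):
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    substitutions = {a + c + b[1:] for a, b in splits if b for c in letters}
    insertions = {a + c + b for a, b in splits for c in letters}
    return (deletes | substitutions | insertions) - {word}

def phonetic_filter(drug_name):
    """Keep only misspelling variants whose Soundex code matches the original."""
    target = soundex(drug_name)
    return sorted(v for v in edit1_variants(drug_name) if soundex(v) == target)

print(phonetic_filter("seroquel")[:10])   # hypothetical drug-name example
```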
Hughes, J Antony; Phillips, Gordon; Reed, Phil
2013-01-01
Basic literacy skills underlie much future adult functioning, and are targeted in children through a variety of means. Children with reading problems were either exposed to a self-paced computer programme that focused on improving phonetic ability or underwent a classroom-based reading intervention. Exposure was limited to three 40-minute sessions a week for six weeks. The children were assessed in terms of their reading, spelling, and mathematics abilities, as well as for their externalising and internalising behaviour problems, before the programme commenced, and immediately after the programme terminated. Relative to the control group, the computer programme improved reading by about seven months in boys (but not in girls), but had no impact on either spelling or mathematics. Children on the programme also demonstrated fewer externalising and internalising behaviour problems than the control group. The results suggest that brief exposure to a self-paced phonetic computer-teaching programme had some benefits for the sample.
Yoo, Sejin; Chung, Jun-Young; Jeon, Hyeon-Ae; Lee, Kyoung-Min; Kim, Young-Bo; Cho, Zang-Hee
2012-07-01
Speech production is inextricably linked to speech perception, yet they are usually investigated in isolation. In this study, we employed a verbal-repetition task to identify the neural substrates of speech processing with two ends active simultaneously using functional MRI. Subjects verbally repeated auditory stimuli containing an ambiguous vowel sound that could be perceived as either a word or a pseudoword depending on the interpretation of the vowel. We found verbal repetition commonly activated the audition-articulation interface bilaterally at Sylvian fissures and superior temporal sulci. Contrasting word-versus-pseudoword trials revealed neural activities unique to word repetition in the left posterior middle temporal areas and activities unique to pseudoword repetition in the left inferior frontal gyrus. These findings imply that the tasks are carried out using different speech codes: an articulation-based code of pseudowords and an acoustic-phonetic code of words. It also supports the dual-stream model and imitative learning of vocabulary. Copyright © 2012 Elsevier Inc. All rights reserved.
The stability of locus equation slopes across stop consonant voicing/aspiration
NASA Astrophysics Data System (ADS)
Sussman, Harvey M.; Modarresi, Golnaz
2004-05-01
The consistency of locus equation slopes as phonetic descriptors of stop place in CV sequences across voiced and voiceless aspirated stops was explored in the speech of five male speakers of American English and two male speakers of Persian. Using traditional locus equation measurement sites for F2 onsets, voiceless labial and coronal stops had significantly lower locus equation slopes relative to their voiced counterparts, whereas velars failed to show voicing differences. When locus equations were derived using F2 onsets for voiced stops that were measured closer to the stop release burst, comparable to the protocol for measuring voiceless aspirated stops, no significant effects of voicing/aspiration on locus equation slopes were observed. This methodological factor, rather than an underlying phonetic-based explanation, provides a reasonable account for the observed flatter locus equation slopes of voiceless labial and coronal stops relative to voiced cognates reported in previous studies [Molis et al., J. Acoust. Soc. Am. 95, 2925 (1994); O. Engstrand and B. Lindblom, PHONUM 4, 101-104]. [Work supported by NIH.]
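For readers unfamiliar with locus equations, the sketch below fits one: a linear regression of F2 at vowel onset on F2 at the vowel midpoint across CV tokens sharing a stop. The Hz values are invented; only the fitting step reflects the method discussed above.

```python
# A locus equation is a linear regression of F2 measured at vowel onset
# against F2 at the vowel midpoint across CV tokens with the same stop.
import numpy as np

f2_midpoint = np.array([ 900, 1200, 1500, 1800, 2100, 2400])   # Hz, made-up /bV/ tokens
f2_onset    = np.array([1050, 1250, 1450, 1700, 1900, 2150])   # Hz, at vowel onset

slope, intercept = np.polyfit(f2_midpoint, f2_onset, 1)
print(f"locus equation: F2_onset = {slope:.2f} * F2_mid + {intercept:.0f} Hz")
```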
Extrinsic cognitive load impairs low-level speech perception.
Mattys, Sven L; Barden, Katharine; Samuel, Arthur G
2014-06-01
Recent research has suggested that the extrinsic cognitive load generated by performing a nonlinguistic visual task while perceiving speech increases listeners' reliance on lexical knowledge and decreases their capacity to perceive phonetic detail. In the present study, we asked whether this effect is accounted for better at a lexical or a sublexical level. The former would imply that cognitive load directly affects lexical activation but not perceptual sensitivity; the latter would imply that increased lexical reliance under cognitive load is only a secondary consequence of imprecise or incomplete phonetic encoding. Using the phoneme restoration paradigm, we showed that perceptual sensitivity decreases (i.e., phoneme restoration increases) almost linearly with the effort involved in the concurrent visual task. However, cognitive load had only a minimal effect on the contribution of lexical information to phoneme restoration. We concluded that the locus of extrinsic cognitive load on the speech system is perceptual rather than lexical. Mechanisms by which cognitive load increases tolerance to acoustic imprecision and broadens phonemic categories were discussed.
Support for linguistic macrofamilies from weighted sequence alignment
Jäger, Gerhard
2015-01-01
Computational phylogenetics is in the process of revolutionizing historical linguistics. Recent applications have shed new light on controversial issues, such as the location and time depth of language families and the dynamics of their spread. So far, these approaches have been limited to single-language families because they rely on a large body of expert cognacy judgments or grammatical classifications, which is currently unavailable for most language families. The present study pursues a different approach. Starting from raw phonetic transcription of core vocabulary items from very diverse languages, it applies weighted string alignment to track both phonetic and lexical change. Applied to a collection of ∼1,000 Eurasian languages and dialects, this method, combined with phylogenetic inference, leads to a classification in excellent agreement with established findings of historical linguistics. Furthermore, it provides strong statistical support for several putative macrofamilies contested in current historical linguistics. In particular, there is a solid signal for the Nostratic/Eurasiatic macrofamily. PMID:26403857
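The weighted string alignment at the core of this approach can be sketched as a Needleman-Wunsch scoring pass with sound-class-sensitive substitution weights; the weights below are invented for illustration, whereas the study derives its weights empirically from cross-linguistic data.

```python
# Minimal weighted alignment sketch (Needleman-Wunsch) over phonetic strings.
def align_score(s, t, sub_cost, gap=-1.0):
    n, m = len(s), len(t)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i - 1][j - 1] + sub_cost(s[i - 1], t[j - 1]),
                           dp[i - 1][j] + gap,        # gap in t
                           dp[i][j - 1] + gap)        # gap in s
    return dp[n][m]

def toy_sub_cost(a, b):
    """Reward identical sounds, partially reward related sound classes."""
    if a == b:
        return 1.0
    related = [set("bp"), set("dt"), set("gk"), set("fv"), set("sz")]
    return 0.5 if any(a in r and b in r for r in related) else -0.5

print(align_score("vader", "fater", toy_sub_cost))   # Dutch/German 'father'-like pair
```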
Harris, Margaret; Moreno, Constanza
2006-01-01
Nine children with severe-profound prelingual hearing loss and single-word reading scores not more than 10 months behind chronological age (Good Readers) were matched with 9 children whose reading lag was at least 15 months (Poor Readers). Good Readers had significantly higher spelling and reading comprehension scores. They produced significantly more phonetic errors (indicating the use of phonological coding) and more often correctly represented the number of syllables in spelling than Poor Readers. They also scored more highly on orthographic awareness and were better at speech reading. Speech intelligibility was the same in the two groups. Cluster analysis revealed that only three Good Readers showed strong evidence of phonetic coding in spelling although seven had good representation of syllables; only four had high orthographic awareness scores. However, all 9 children were good speech readers, suggesting that a phonological code derived through speech reading may underpin reading success for deaf children.
Preparing novice teachers to develop basic reading and spelling skills in children.
Spear-Swerling, Louise; Brucker, Pamela Owen
2004-12-01
This study examined the word-structure knowledge of novice teachers and the progress of children tutored by a subgroup of the teachers. Teachers' word-structure knowledge was assessed using three tasks: graphophonemic segmentation, classification of pseudowords by syllable type, and classification of real words as phonetically regular or irregular. Tutored children were assessed on several measures of basic reading and spelling skills. Novice teachers who received word-structure instruction outperformed a comparison group of teachers in word-structure knowledge at post-test. Tutored children improved significantly from pre-test to post-test on all assessments. Teachers' post-test knowledge on the graphophonemic segmentation and irregular words tasks correlated significantly with tutored children's progress in decoding phonetically regular words; error analyses indicated links between teachers' patterns of word-structure knowledge and children's patterns of decoding progress. The study suggests that word-structure knowledge is important to effective teaching of word decoding and underscores the need to include this information in teacher preparation.
Baqué, Lorraine
2017-01-01
This study sought to investigate stress production in Spanish by patients with Broca's (BA) and conduction aphasia (CA) as compared to controls. Our objectives were to assess whether: a) there were any abnormal acoustic correlates of stress in the patients' productions, b) these abnormalities had a phonetic component, and c) the ability to compensate articulatorily for stress marking was preserved. The results showed abnormal acoustic values in both BA and CA's productions, affecting not only duration but also F0 and intensity cues, and an interaction effect of stress pattern and duration on intensity cues in BA, but not in CA or controls. The results are interpreted as deriving from two different underlying phenomena: in BA, a compensatory use of intensity as a stress cue in order to avoid 'equal stress'; in CA, related to either a 'subtle phonetic deficit' involving abnormal stress acoustic cue-processing or to 'clear-speech' effects.
Non-Selective Lexical Access in Late Arabic-English Bilinguals: Evidence from Gating.
Boudelaa, Sami
2018-02-07
Previous research suggests that late bilinguals who speak typologically distant languages are the least likely to show evidence of non-selective lexical access processes. This study puts this claim to test by using the gating task to determine whether words beginning with speech sounds that are phonetically similar in Arabic and English (e.g., [b,d,m,n]) give rise to selective or non-selective lexical access processes in late Arabic-English bilinguals. The results show that an acoustic-phonetic input (e.g., [bæ]) that is consistent with words in Arabic (e.g., [bædrun] "moon") and English (e.g., [bæd] "bad") activates lexical representations in both languages of the bilingual. This non-selective activation holds equally well for mixed lists with words from both Arabic and English and blocked lists consisting only of Arabic or English words. These results suggest that non-selective lexical access processes are the default mechanism even in late bilinguals of typologically distant languages.
Hsiao, Janet H; Cheung, Kit
2016-03-01
In Chinese orthography, the most common character structure consists of a semantic radical on the left and a phonetic radical on the right (SP characters); the minority, opposite arrangement also exists (PS characters). Recent studies showed that SP character processing is more left hemisphere (LH) lateralized than PS character processing. Nevertheless, it remains unclear whether this is due to phonetic radical position or character type frequency. Through computational modeling with artificial lexicons, in which we implement a theory of hemispheric asymmetry in perception but do not assume phonological processing being LH lateralized, we show that the difference in character type frequency alone is sufficient to exhibit the effect that the dominant type has a stronger LH lateralization than the minority type. This effect is due to higher visual similarity among characters in the dominant type than the minority type, demonstrating the modulation of visual similarity of words on hemispheric lateralization. Copyright © 2015 Cognitive Science Society, Inc.
Gow, David W; Olson, Bruna B
2015-07-01
Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear, however, whether these effects reflect prelexical sensitivity to phonotactic frequency, or lexical "gang effects" in speech perception. In this paper, we use Granger causality analysis of MR-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account.
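As a stand-in for the source-space effective-connectivity analysis described above, the toy example below runs a bivariate Granger causality test on simulated series in which one signal drives the other at a one-sample lag; statsmodels is an assumed tool choice, not the authors' pipeline.

```python
# Toy bivariate Granger causality test: does x help predict y beyond y's own past?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + 0.2 * rng.standard_normal()   # x drives y at lag 1

# Column order matters: the test asks whether column 2 Granger-causes column 1.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2)
```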
The influence of lexical statistics on temporal lobe cortical dynamics during spoken word listening
Cibelli, Emily S.; Leonard, Matthew K.; Johnson, Keith; Chang, Edward F.
2015-01-01
Neural representations of words are thought to have a complex spatio-temporal cortical basis. It has been suggested that spoken word recognition is not a process of feed-forward computations from phonetic to lexical forms, but rather involves the online integration of bottom-up input with stored lexical knowledge. Using direct neural recordings from the temporal lobe, we examined cortical responses to words and pseudowords. We found that neural populations were not only sensitive to lexical status (real vs. pseudo), but also to cohort size (number of words matching the phonetic input at each time point) and cohort frequency (lexical frequency of those words). These lexical variables modulated neural activity from the posterior to anterior temporal lobe, and also dynamically as the stimuli unfolded on a millisecond time scale. Our findings indicate that word recognition is not purely modular, but relies on rapid and online integration of multiple sources of lexical knowledge. PMID:26072003
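The cohort statistics referred to above (cohort size and summed cohort frequency as the word unfolds) can be computed from a pronunciation lexicon as sketched below; the toy lexicon, phoneme strings, and frequencies are invented for illustration.

```python
# Sketch of the lexical statistics used above: as a word unfolds phoneme by
# phoneme, the "cohort" is the set of words still consistent with the input.
lexicon = {                      # word -> (phonemes, lexical frequency per million)
    "cat":     (("k", "ae", "t"), 120.0),
    "cap":     (("k", "ae", "p"),  35.0),
    "catalog": (("k", "ae", "t", "ah", "l", "aa", "g"), 8.0),
    "dog":     (("d", "ao", "g"),  90.0),
}

def cohort_stats(target_phonemes):
    """Return (cohort size, summed cohort frequency) after each phoneme."""
    stats = []
    for i in range(1, len(target_phonemes) + 1):
        prefix = target_phonemes[:i]
        cohort = [(w, f) for w, (ph, f) in lexicon.items() if ph[:i] == prefix]
        stats.append((len(cohort), sum(f for _, f in cohort)))
    return stats

print(cohort_stats(("k", "ae", "t")))   # [(3, 163.0), (3, 163.0), (2, 128.0)]
```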
Development of Phonological Constancy
Best, Catherine T.; Tyler, Michael D.; Gooding, Tiffany N.; Orlando, Corey B.; Quann, Chelsea A.
2009-01-01
Efficient word recognition depends on detecting critical phonetic differences among similar-sounding words, or sensitivity to phonological distinctiveness, an ability evident at 19 months of age but unreliable at 14 to 15 months of age. However, little is known about phonological constancy, the equally crucial ability to recognize a word's identity across natural phonetic variations, such as those in cross-dialect pronunciation differences. We show that 15- and 19-month-old children recognize familiar words spoken in their native dialect, but that only the older children recognize familiar words in a dissimilar nonnative dialect, providing evidence for emergence of phonological constancy by 19 months. These results are compatible with a perceptual-attunement account of developmental change in early word recognition, but not with statistical-learning or phonological accounts. Thus, the complementary skills of phonological constancy and distinctiveness both appear at around 19 months of age, together providing the child with a fundamental insight that permits rapid vocabulary growth and later reading acquisition. PMID:19368700
Word Length and Lexical Activation: Longer Is Better
ERIC Educational Resources Information Center
Pitt, Mark A.; Samuel, Arthur G.
2006-01-01
Many models of spoken word recognition posit the existence of lexical and sublexical representations, with excitatory and inhibitory mechanisms used to affect the activation levels of such representations. Bottom-up evidence provides excitatory input, and inhibition from phonetically similar representations leads to lexical competition. In such a…
Disorders of Articulation. Prentice-Hall Foundations of Speech Pathology Series.
ERIC Educational Resources Information Center
Carrell, James A.
Designed for students of speech pathology and audiology and for practicing clinicians, the text considers the nature of the articulation process, criteria for diagnosis, and classification and etiology of disorders. Also discussed are phonetic characteristics, including phonemic errors and configurational and contextual defects; and functional…
Dynamics of Phonological Cognition
ERIC Educational Resources Information Center
Gafos, Adamantios I.; Benus, Stefan
2006-01-01
A fundamental problem in spoken language is the duality between the continuous aspects of phonetic performance and the discrete aspects of phonological competence. We study two instances of this problem, drawn from the phenomena of voicing neutralization and vowel harmony. In each case, we present a model where the experimentally observed continuous…
Phonological and Phonetic Asymmetries of Cw Combinations
ERIC Educational Resources Information Center
Suh, Yunju
2009-01-01
This thesis investigates the relationship between the phonological distribution of Cw combinations, and the acoustic/perceptual distinctiveness between syllables with plain C onsets and with Cw combination onsets. Distributional asymmetries of Cw combinations discussed in this thesis include the avoidance of Cw combinations in the labial consonant…
ERIC Educational Resources Information Center
Snyder, Sarah
A booklet for limited English speakers on money management provides information on savings accounts, checking accounts, choosing a bank, and the basics of budgeting. Cartoons, questions about the message in cartoons and narrative passages, checklists on things to consider, and the phonetic pronunciation of key words are presented. Specific topics…
Phonetics Information Base and Lexicon
ERIC Educational Resources Information Center
Moran, Steven Paul
2012-01-01
In this dissertation, I investigate the linguistic and technological challenges involved in creating a cross-linguistic data set to undertake phonological typology. I then address the question of whether more sophisticated, knowledge-based approaches to data modeling, coupled with a broad cross-linguistic data set, can extend previous typological…
Measuring Syntactic Complexity in Spontaneous Spoken Swedish
ERIC Educational Resources Information Center
Roll, Mikael; Frid, Johan; Horne, Merle
2007-01-01
Hesitation disfluencies after phonetically prominent stranded function words are thought to reflect the cognitive coding of complex structures. Speech fragments following the Swedish function word "att" "that" were analyzed syntactically, and divided into two groups: one with "att" in disfluent contexts, and the other with "att" in fluent…
The Role of Phonetics in the Teaching of Foreign Languages in India
ERIC Educational Resources Information Center
Bansal, R. K.
1974-01-01
Oral work is considered the most effective way of laying the foundations for language proficiency. Recognition and production of vowels and consonants, use of a pronouncing dictionary, and practice in accent, rhythm, and intonation should all be included in a pronunciation course. (SC)
Interaction in Bilingual Phonological Acquisition: Evidence from Phonetic Inventories
ERIC Educational Resources Information Center
Fabiano-Smith, Leah; Barlow, Jessica A.
2010-01-01
Purpose: To examine how interaction contributes to phonological acquisition in bilingual children in order to determine what constitutes typical development of bilingual speech sound inventories. Method: Twenty-four children, ages 3-4, were included: eight bilingual Spanish-English-speaking children, eight monolingual Spanish speakers, and eight…
Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears
ERIC Educational Resources Information Center
Chen, Sau-Chin; Hu, Jon-Fan
2015-01-01
Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…
Language Universals and Misidentification: A Two-Way Street
ERIC Educational Resources Information Center
Berent, Iris; Lennertz, Tracy; Balaban, Evan
2012-01-01
Certain ill-formed phonological structures are systematically under-represented across languages and misidentified by human listeners. It is currently unclear whether this results from grammatical phonological knowledge that actively recodes ill-formed structures, or from difficulty with their phonetic encoding. To examine this question, we gauge…
ERIC Educational Resources Information Center
Snyder, Sarah
A booklet for limited English speakers on dealing with consumer problems provides information on talking to businesses, getting someone on your side, going to the law, and stopping problems before they start. Cartoons, questions about the message in cartoons and narrative passages, checklists on things to consider, and the phonetic pronunciation…
The Role of Idiomorphs in Emergent Literacy
ERIC Educational Resources Information Center
Neumann, Michelle M.; Neumann, David L.
2012-01-01
Psycholinguistics coined the term idiomorph to describe idiosyncratic invented word-like units that toddlers use to refer to familiar objects during their early language development (Haslett & Samter, 1997; Otto, 2008; Reich, 1986; Scovel, 2004; Werner & Kaplan, 1963). Idiomorphs act as "words" because their meanings and phonetic pronunciations…
Computer Processing of Esperanto Text.
ERIC Educational Resources Information Center
Sherwood, Bruce
1981-01-01
Basic aspects of computer processing of Esperanto are considered in relation to orthography and computer representation, phonetics, morphology, one-syllable and multisyllable words, lexicon, semantics, and syntax. There are 28 phonemes in Esperanto, each represented in orthography by a single letter. The PLATO system handles diacritics by using a…
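The record above notes that Esperanto orthography is strictly one letter per phoneme, with six diacritic letters that systems such as PLATO had to handle specially. As an illustrative sketch only (the "x-system" digraph convention and the function below are assumptions for illustration, not the PLATO mechanism the paper describes), plain-ASCII input can be normalized before any per-letter processing:

```python
# Minimal sketch, NOT the PLATO diacritic handling described above: one common
# ASCII convention (the "x-system") writes Esperanto's six diacritic letters as
# digraphs (cx, gx, hx, jx, sx, ux). Normalizing them first preserves the
# one-letter-one-phoneme property for later processing.

X_SYSTEM = {"cx": "ĉ", "gx": "ĝ", "hx": "ĥ", "jx": "ĵ", "sx": "ŝ", "ux": "ŭ"}

def normalize_esperanto(text: str) -> str:
    """Replace x-system digraphs with the corresponding diacritic letters."""
    out = text.lower()
    for digraph, letter in X_SYSTEM.items():
        out = out.replace(digraph, letter)
    return out

if __name__ == "__main__":
    print(normalize_esperanto("ehxosxangxo cxiujxauxde"))  # eĥoŝanĝo ĉiuĵaŭde
```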
French Dictionaries. Series: Specialised Bibliographies.
ERIC Educational Resources Information Center
Klaar, R. M.
This is a list of French monolingual, French-English and English-French dictionaries available in December 1975. Dictionaries of etymology, phonetics, place names, proper names, and slang are included, as well as dictionaries for children and dictionaries of Belgian, Canadian, and Swiss French. Most other specialized dictionaries, encyclopedias,…
Reliability and Factorial Validity of the Artes de Lenguaje.
ERIC Educational Resources Information Center
Powers, Stephen; And Others
1984-01-01
Spanish-speaking first graders were administered the Artes de Lenguaje (ADL)--a Spanish, criterion-referenced language arts test. Reliability analyses indicated the adequacy of three of the four subscales (Phonetic Analysis, Vocabulary Development, Comprehension Skills, and General Skills). A principal factors analysis of the intercorrelation…
An Introduction to Descriptive Linguistics. Revised Edition.
ERIC Educational Resources Information Center
Gleason, H.A., Jr.
Beginning chapters of this volume define language and describe the sound, stress, and intonation systems of English. The body of the text explores extensively morphology, phonetics, phonemics, and the process of communication. Individual chapters detail such topics as morphemes, syntactic devices, grammatical systems, phonemic problems in language…
Tracking Speech Sound Acquisition
ERIC Educational Resources Information Center
Powell, Thomas W.
2011-01-01
This article describes a procedure to aid in the clinical appraisal of child speech. The approach, based on the work by Dinnsen, Chin, Elbert, and Powell (1990; Some constraints on functionally disordered phonologies: Phonetic inventories and phonotactics. "Journal of Speech and Hearing Research", 33, 28-37), uses a railway idiom to track gains in…
Aspects of the Teaching of Russian.
ERIC Educational Resources Information Center
Baker, Robert L.
The process of learning Russian should be no more difficult than the process of learning other languages although it may take somewhat longer. The phonetic system should not present major difficulties with respect to individual sounds, but intonation may be difficult because Russian pitch patterns represent different intentions and emotions than…
Effects of Participant Engagement on Prosodic Prominence
ERIC Educational Resources Information Center
Buxó-Lugo, Andrés; Toscano, Joseph C.; Watson, Duane G.
2018-01-01
It is generally assumed that prosodic cues that provide linguistic information, like discourse status, are driven primarily by the information structure of the conversation. This article investigates whether speakers have the capacity to adjust subtle acoustic-phonetic properties of the prosodic signal when they find themselves in contexts in…
Syntactic Predictability in the Recognition of Carefully and Casually Produced Speech
ERIC Educational Resources Information Center
Viebahn, Malte C.; Ernestus, Mirjam; McQueen, James M.
2015-01-01
The present study investigated whether the recognition of spoken words is influenced by how predictable they are given their syntactic context and whether listeners assign more weight to syntactic predictability when acoustic-phonetic information is less reliable. Syntactic predictability was manipulated by varying the word order of past…
Nonword Repetition in Children and Adults: Effects on Movement Coordination
ERIC Educational Resources Information Center
Sasisekaran, Jayanthi; Smith, Anne; Sadagopan, Neeraja; Weber-Fox, Christine
2010-01-01
Hearing and repeating novel phonetic sequences, or novel nonwords, is a task that taps many levels of processing, including auditory decoding, phonological processing, working memory, speech motor planning and execution. Investigations of nonword repetition abilities have been framed within models of psycholinguistic processing, while the motor…
Collaborative Projects: A Study of Paired Work in a Malaysian University.
ERIC Educational Resources Information Center
Holmes, Richard
2003-01-01
Examines the project work of university students in a TESOL (Teaching of English as a Second Language) program in Malaysia. Compares phonetics and phonology projects completed by students working in pairs with those completed by students alone and reports student attitudes and strategies. (Author/LRW)
What a Nonnative Speaker of English Needs to Learn through Listening.
ERIC Educational Resources Information Center
Bohlken, Robert; Macias, Lori
Teaching nonnative speakers of English to listen for the discriminating nuances of the language is an important but neglected aspect of American English language training. A discriminating listening process follows a sequence of distinguishing phonemes, suprasegmental phonemes, morphemes, and syntax. Certain phonetic differences can be noted…
Variation and Change in Northern Bavarian Quantity
ERIC Educational Resources Information Center
Drake, Derek
2013-01-01
This dissertation presents new research on the "Bavarian Quantity Law" (the BQL) in the northern Bavarian dialect of Hahnbach. Building upon earlier investigation of the BQL (cf. Bannert 1976a,b for Central Bavarian), this study examines the historical, phonological, and phonetic motivations for this feature as well as the variability in its…
Apraxia of Speech: The Effectiveness of a Treatment Regimen.
ERIC Educational Resources Information Center
Dworkin, James Paul; And Others
1988-01-01
A treatment program is described which successfully improved the speech of a 57-year-old apraxic patient. The program was composed of physiologic (nonspeech) and phonetic (articulatory) tasks that began with oroneuromotor control activities and progressed to consonant-vowel syllable, word, and sentence drills, with all activities paced by a…
The Effects of Phonetic Similarity and List Length on Children's Sound Categorization Performance.
ERIC Educational Resources Information Center
Snowling, Margaret J.; And Others
1994-01-01
Examined the phonological analysis and verbal working memory components of the sound categorization task and their relationships to reading skill differences. Children were tested on sound categorization by having them identify odd words in sequences. Sound categorization performance was sensitive to individual differences in speech perception…
Phonetics and Phonology of Thematic Contrast in German
ERIC Educational Resources Information Center
Braun, Bettina
2006-01-01
It is acknowledged that contrast plays an important role in understanding discourse and information structure. While it is commonly assumed that contrast can be marked by intonation only, our understanding of the intonational realization of contrast is limited. For German there is mainly introspective evidence that the rising theme accent (or…
Perception and Acoustic Correlates of the Taiwanese Tone Sandhi Group
ERIC Educational Resources Information Center
Kuo, Chen-Hsiu
2013-01-01
This dissertation investigates how the Taiwanese Tone Sandhi Groups are perceived, and the acoustic/phonetic correlates of listeners' judgments. A series of perception experiments was conducted to scrutinize the following topics--Taiwanese tone neutralization, Tone Sandhi Group (TSG) as a prosodic domain, perceived boundary strength in…
ERIC Educational Resources Information Center
Maciejewski, Anthony A.; Leung, Nelson K.
1992-01-01
The Nihongo Tutorial System is designed to assist English-speaking scientists and engineers in acquiring reading proficiency in Japanese technical literature. It provides individualized lessons that match interest area/language ability with available materials that are encoded with syntactic, phonetic, and morphological information. (14…
Linguistic Parameters in Performance Models.
ERIC Educational Resources Information Center
Mansell, Philip
This paper deals with problems concerning the nature of the input to a phonetic processor. Several assumptions provide the basis for consideration of the problem. There is a phonological level of processing which reflects the sound structure of the language; the rules associated with it are not affected by variables associated either with the…
Frequency Effects in Second Language Acquisition: An Annotated Survey
ERIC Educational Resources Information Center
Kartal, Galip; Sarigul, Ece
2017-01-01
The aim of this study is to investigate the relationship between frequency and language acquisition from many perspectives including implicit and explicit instruction, frequency effects on morpheme acquisition in L2, the relationship between frequency and multi-word constructions, frequency effects on phonetics, vocabulary, gerund and infinitive…
Morphophonemic Transfer in English Second Language Learners
ERIC Educational Resources Information Center
Ping, Sze Wei; Rickard Liow, Susan J.
2011-01-01
Malay (Rumi) is alphabetic and has a transparent, agglutinative system of affixation. We manipulated language-specific junctural phonetics in Malay and English to investigate whether morphophonemic L1-knowledge influences L2-processing. A morpheme decision task, "Does this "nonword" sound like a mono- or bi-morphemic English word?", was developed…
ERIC Educational Resources Information Center
McCracken, Chelsea Leigh
2012-01-01
This dissertation is a description of the grammar of Belep [yly], an Austronesian language variety spoken by about 1600 people in and around the Belep Isles in New Caledonia. The grammar begins with a summary of the cultural and linguistic background of Belep speakers, followed by chapters on Belep phonology and phonetics, morphology and word…
Combining Formal and Functional Approaches to Topic Structure
ERIC Educational Resources Information Center
Zellers, Margaret; Post, Brechtje
2012-01-01
Fragmentation between formal and functional approaches to prosodic variation is an ongoing problem in linguistic research. In particular, the frameworks of the Phonetics of Talk-in-Interaction (PTI) and Empirical Phonology (EP) take very different theoretical and methodological approaches to this kind of variation. We argue that it is fruitful to…
Geolinguistic Diffusion and the U.S.-Canada Border.
ERIC Educational Resources Information Center
Boberg, Charles
2000-01-01
Uses data from both sides of the U.S.-Canada border to test a model regarding the way language changes diffuse over space. Two cases are examined: the non-diffusion of phonetic features from Detroit to Windsor and the gradual infiltration into Canadian English of American foreign (a) pronunciations. (Author/VWL)
Talker Identification across Source Mechanisms: Experiments with Laryngeal and Electrolarynx Speech
ERIC Educational Resources Information Center
Perrachione, Tyler K.; Stepp, Cara E.; Hillman, Robert E.; Wong, Patrick C. M.
2014-01-01
Purpose: The purpose of this study was to determine listeners' ability to learn talker identity from speech produced with an electrolarynx, explore source and filter differentiation in talker identification, and describe acoustic-phonetic changes associated with electrolarynx use. Method: Healthy adult control listeners learned to identify…
Templates in Early Phonological Development
ERIC Educational Resources Information Center
Sowers-Wills, Sara
2017-01-01
Child language data are notoriously noisy. Children may produce several phonetic variants for a given word or use the same forms for several different words. As such, child data are characterized by little apparent systematicity. Competing theories have arisen to account for a range of problematic phenomena, but each has struggled to relate child…
The Effectiveness of Clear Speech as a Masker
ERIC Educational Resources Information Center
Calandruccio, Lauren; Van Engen, Kristin; Dhar, Sumitrajit; Bradlow, Ann R.
2010-01-01
Purpose: It is established that speaking clearly is an effective means of enhancing intelligibility. Because any signal-processing scheme modeled after known acoustic-phonetic features of clear speech will likely affect both target and competing speech, it is important to understand how speech recognition is affected when a competing speech signal…
ERIC Educational Resources Information Center
Barbara, Leila, Ed.; Rajagopalan, Kanavillil, Ed.
1999-01-01
These issues include the following articles: "Portuguese Philology in Brazil" (Heitor Megale, Cesar Nardelli Cambraia); "Implications of Brazilian Portuguese Data for Current Controversies in Phonetics: Towards Sharpening Articulatory Phonology" (Eleonora Cavalcante Albano); "Morphological Studies in Brazil: Data and…
Prenuclear Accentuation in English: Phonetics, Phonology, Information Structure
ERIC Educational Resources Information Center
Bishop, Jason Brandon
2013-01-01
A primary function of prosody in many languages is to convey information structure--the "packaging" of a sentence's content into categories such as "focus", "given" and "topic". In English and other West Germanic languages it is widely assumed that focus is signaled prosodically by the location of a…
The Riggs Institute: What We Teach.
ERIC Educational Resources Information Center
McCulloch, Myrna
Phonetic content/handwriting instruction begins by teaching the sounds of, and letter formation for, the 70 "Orton" phonograms, which are the commonly used correct spelling patterns for the 45 sounds of English speech. The purpose for teaching the sound/symbol relationship first in isolation, without key words or pictures (explicitly), is to give…
A Handbook for Literacy Teachers.
ERIC Educational Resources Information Center
McKilliam, K. R.
The methods described in this handbook can be adapted for use in any language which can be written phonetically. Chapters cover the value of adult literacy, history of the alphabet, history of methods of teaching reading and writing, principles of teaching, sounds as symbols, lesson construction, letter construction, the method of teaching…
Annual Report--Automatic Indexing and Abstracting.
ERIC Educational Resources Information Center
Lockheed Missiles and Space Co., Palo Alto, CA. Electronic Sciences Lab.
The investigation is concerned with the development of automatic indexing, abstracting, and extracting systems. Basic investigations in English morphology, phonetics, and syntax are pursued as necessary means to this end. In the first section the theory and design of the "Sentence Dictionary" experiment in automatic extraction is outlined. Some of…
English Intonation and Computerized Speech Synthesis. Technical Report No. 287.
ERIC Educational Resources Information Center
Levine, Arvin
This work treats some of the important issues encountered in an attempt to synthesize natural sounding English speech from arbitrary written text. Details of the systems that interact in producing speech are described. The principal systems dealt with are phonology (intonation), phonetics, syntax, semantics, and text-view (discourse). Technical…
Nursery Rhyme Knowledge and Phonological Awareness in Preschool Children
ERIC Educational Resources Information Center
Harper, Laurie J.
2011-01-01
Phonological awareness is an important precursor in learning to read. This awareness of phonemes fosters a child's ability to hear and blend sounds, encode and decode words, and to spell phonetically. This quantitative study assessed pre-K children's existing Euro-American nursery rhyme knowledge and phonological awareness literacy, provided…
ERIC Educational Resources Information Center
No, Yongkyoon, Ed.; Libucha, Mark, Ed.
Papers include: "Length and Structure Effects in Syntactic Processing"; "Nantong Tone Sandhi and Tonal Feature Geometry"; "Event Reference and Property Theory"; "Function-Argument Structure, Category Raising and Bracketing Paradoxes"; "At the Phonetics-Phonology Interface: (Re)Syllabification and English Stop…
Mimicking Accented Speech as L2 Phonological Awareness
ERIC Educational Resources Information Center
Mora, Joan C.; Rochdi, Youssef; Kivistö-de Souza, Hanna
2014-01-01
This study investigated Spanish-speaking learners' awareness of a non-distinctive phonetic difference between Spanish and English through a delayed mimicry paradigm. We assessed learners' speech production accuracy through voice onset time (VOT) duration measures in word-initial pre-vocalic /p t k/ in Spanish and English words, and in Spanish…
Spanish-English Speech Perception in Children and Adults: Developmental Trends
ERIC Educational Resources Information Center
Brice, Alejandro E.; Gorman, Brenda K.; Leung, Cynthia B.
2013-01-01
This study explored the developmental trends and phonetic category formation in bilingual children and adults. Participants included 30 fluent Spanish-English bilingual children, aged 8-11, and bilingual adults, aged 18-40. All completed gating tasks that incorporated code-mixed Spanish-English stimuli. There were significant differences in…
"Jaja" in Spoken German: Managing Knowledge Expectations
ERIC Educational Resources Information Center
Taleghani-Nikazm, Carmen; Golato, Andrea
2016-01-01
In line with the other contributions to this issue on teaching pragmatics, this paper provides teachers of German with a two-day lesson plan for integrating authentic spoken language and its associated cultural background into their teaching. Specifically, the paper discusses how "jaja" and its phonetic variants are systematically used…
Perception of English Intonation by English, Spanish, and Chinese Listeners
ERIC Educational Resources Information Center
Grabe, Esther; Rosner, Burton S.; Garcia-Albea, Jose E.; Zhou, Xiaolin
2003-01-01
Native language affects the perception of segmental phonetic structure, of stress, and of semantic and pragmatic effects of intonation. Similarly, native language might influence the perception of similarities and differences among intonation contours. To test this hypothesis, a cross-language experiment was conducted. An English utterance was…
ERIC Educational Resources Information Center
Snyder, Sarah
A booklet for limited English speakers on buying furnishings provides information on what to do before going shopping for furniture, how to make a selection, and how to pay for the acquisition. Cartoons, questions about the message in cartoons and narrative passages, checklists on things to evaluate, and the phonetic pronunciation of key words are…
Flaws in Commercial Reading Materials.
ERIC Educational Resources Information Center
Axelrod, Jerome
Three flaws found in commercial reading materials, such as workbooks and kits, are discussed in this paper, and examples of the flaws are taken from specific materials. The first problem noted is that illustrations frequently provide the information that the learner is supposed to supply through phonetic or structural analysis; the illustrations…
The Influence of Phonetic Dimensions on Aphasic Speech Perception
ERIC Educational Resources Information Center
Hessler, Dorte; Jonkers, Roel; Bastiaanse, Roelien
2010-01-01
Individuals with aphasia have more problems detecting small differences between speech sounds than larger ones. This paper reports how phonemic processing is impaired and how this is influenced by speechreading. A non-word discrimination task was carried out with "audiovisual", "auditory only" and "visual only" stimulus display. Subjects had to…
/l/ Production in English-Arabic Bilingual Speakers.
ERIC Educational Resources Information Center
Khattab, Ghada
2002-01-01
Reports an analysis of /l/ production by English-Arabic bilingual children. Addresses the question of whether the bilingual develops one phonological system or two by calling for a refinement of the notion of system using insights from recent phonetic and sociolinguistic work on variability in speech. English-Arabic bilinguals were studied.…
Imitation of Para-Phonological Detail Following Left Hemisphere Lesions
ERIC Educational Resources Information Center
Kappes, Juliane; Baumgaertner, Annette; Peschke, Claudia; Goldenberg, Georg; Ziegler, Wolfram
2010-01-01
Imitation in speech refers to the unintentional transfer of phonologically irrelevant acoustic-phonetic information of auditory input into speech motor output. Evidence for such imitation effects has been explained within the framework of episodic theories. However, it is largely unclear which neural structures mediate speech imitation and how…
Electrophysiological Indices of Phonological Impairments in Dyslexia
ERIC Educational Resources Information Center
Desroches, Amy S.; Newman, Randy Lynn; Robertson, Erin K.; Joanisse, Marc F.
2013-01-01
Purpose: A range of studies have shown difficulties in perceiving acoustic and phonetic information in dyslexia; however, much less is known about how such difficulties relate to the perception of individual words. The authors present data from event-related potentials (ERPs) examining the hypothesis that children with dyslexia have difficulties…
Speech Synthesis Applied to Language Teaching.
ERIC Educational Resources Information Center
Sherwood, Bruce
1981-01-01
The experimental addition of speech output to computer-based Esperanto lessons using speech synthesized from text is described. Because of Esperanto's phonetic spelling and simple rhythm, it is particularly easy to describe the mechanisms of Esperanto synthesis. Attention is directed to how the text-to-speech conversion is performed and the ways…
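Because the record above stresses that Esperanto's phonemic spelling makes text-to-speech unusually easy to describe, a toy grapheme-to-phoneme front end can be sketched as a per-letter lookup plus fixed penultimate stress. The mapping and function below are illustrative assumptions, not the system used in the PLATO lessons:

```python
# Toy sketch (not the synthesis system described above): Esperanto spelling is
# one letter per phoneme and stress falls on the penultimate vowel, so a naive
# text-to-phoneme front end reduces to a lookup table plus one stress rule.

LETTER_TO_PHONE = {
    "a": "a", "b": "b", "c": "ts", "ĉ": "tʃ", "d": "d", "e": "e", "f": "f",
    "g": "g", "ĝ": "dʒ", "h": "h", "ĥ": "x", "i": "i", "j": "j", "ĵ": "ʒ",
    "k": "k", "l": "l", "m": "m", "n": "n", "o": "o", "p": "p", "r": "r",
    "s": "s", "ŝ": "ʃ", "t": "t", "u": "u", "ŭ": "w", "v": "v", "z": "z",
}
VOWELS = set("aeiou")

def word_to_phones(word: str) -> list[str]:
    """Map each letter to a phone; mark stress on the penultimate vowel."""
    phones = [LETTER_TO_PHONE.get(ch, ch) for ch in word]
    vowel_positions = [i for i, ch in enumerate(word) if ch in VOWELS]
    if len(vowel_positions) >= 2:
        i = vowel_positions[-2]
        phones[i] = "ˈ" + phones[i]   # crude stress marker
    return phones

if __name__ == "__main__":
    print(word_to_phones("esperanto"))  # stress lands on the 'a' of -ran-
```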
The New Unabridged English-Persian Dictionary.
ERIC Educational Resources Information Center
Aryanpur, Abbas; Saleh, Jahan Shah
This five-volume English-Persian dictionary is based on Webster's International Dictionary (1960 and 1961) and The Shorter Oxford English Dictionary (1959); it attempts to provide Persian equivalents of all the words of Oxford and all the key-words of Webster. Pronunciation keys for the English phonetic transcription and for the difficult Persian…
Prosodic Encoding in Silent Reading.
ERIC Educational Resources Information Center
Wilkenfeld, Deborah
In silent reading, short-term memory tasks, such as semantic and syntactic processing, require a stage of phonetic encoding between visual representation and the actual extraction of meaning, and this encoding includes prosodic as well as segmental features. To test for this suprasegmental coding, an experiment was conducted in which subjects were…
[Language Manual II: Sesotho].
ERIC Educational Resources Information Center
Peace Corps (Lesotho).
This instructional guide for Sesotho (spoken in several areas of Africa by about 6 million people) is designed for the training of Peace Corps volunteers in Africa. The first two chapters outline Sesotho phonology (phonetics, articulation, and speech sounds and patterns not present in English) and tone and length, grammatical structure (class and…
A Bemba Grammar with Exercises.
ERIC Educational Resources Information Center
Hoch, Ernst
This Bemba grammar begins with an introduction which traces the history of the language, stresses the importance of learning it well and offers hints towards achieving this goal. The grammar itself is divided into three major sections: Part 1, "Phonetics," deals with the Bemba alphabet, tonality, and orthography; Part 2, "Parts of Speech,"…
Shaping Speech Patterns via Predictability and Recoverability
ERIC Educational Resources Information Center
Whang, James Doh Yeon
2017-01-01
Recoverability refers to the ease of recovering the underlying form--stored mental representations--given a surface form--actual, variable output signals (e.g., [ðæt̚], [ðætʰ] → /ðæt/ "that"). Recovery can be achieved from phonetic cues explicitly present in the acoustic signal or through prediction from the context.…
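The record above distinguishes two routes to recovering an underlying form: the phonetic cue itself, or prediction from context. A toy sketch of that distinction follows; all forms and probabilities are hypothetical, not data from the dissertation:

```python
# Toy sketch of the recoverability idea above; forms and probabilities are
# hypothetical. A surface variant maps back to an underlying form directly
# when its phonetic cue is unambiguous, and via contextual prediction when
# the cue alone does not decide between candidates.

CANDIDATES = {
    "ðæt̚":  ["ðæt"],          # unreleased stop: cue still identifies /t/
    "ðætʰ": ["ðæt"],          # aspirated release: cue identifies /t/
    "ðæʔ":  ["ðæt", "ðæp"],   # glottalized: place cue lost, ambiguous
}

CONTEXT_PRIOR = {"ðæt": 0.9, "ðæp": 0.1}   # hypothetical predictability

def recover(surface: str) -> str:
    """Return the most plausible underlying form for a surface variant."""
    candidates = CANDIDATES[surface]
    if len(candidates) == 1:                 # phonetic cue suffices
        return candidates[0]
    return max(candidates, key=CONTEXT_PRIOR.get)   # fall back on context

if __name__ == "__main__":
    for s in CANDIDATES:
        print(s, "->", recover(s))
```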
Perceived Foreign Accent: Extended Stays Abroad, Level of Instruction, and Motivation
ERIC Educational Resources Information Center
Martinsen, Rob A.; Alvord, Scott M.; Tanner, Joshua
2014-01-01
Studies have examined various factors that affect pronunciation including phonetic context, style variation, first language transfer, and experience abroad. A plethora of research has also linked motivation to higher levels of proficiency in the second language. The present study uses native speaker ratings and multiple regression analysis to…
The Structure of Phonological Theory
ERIC Educational Resources Information Center
Samuels, Bridget D.
2009-01-01
This dissertation takes a Minimalist approach to phonology, treating the phonological module as a system of abstract symbolic computation, divorced from phonetic content. I investigate the position of the phonological module within the architecture of grammar and the evolutionary scenario developed by Hauser et al. (2002a) and Fitch et al. (2005).…
Statistical Learning as a Key to Cracking Chinese Orthographic Codes
ERIC Educational Resources Information Center
He, Xinjie; Tong, Xiuli
2017-01-01
This study examines statistical learning as a mechanism for Chinese orthographic learning among children in Grades 3-5. Using an artificial orthography, children were repeatedly exposed to positional, phonetic, and semantic regularities of radicals. Children showed statistical learning of all three regularities. Regularities' levels of consistency…
Interlanguage Phonology: Acquisition of Timing Control in Japanese.
ERIC Educational Resources Information Center
Toda, Takako
1994-01-01
Studies the acquisition of timing control by Australians enrolled in first-year Japanese. Instrumental techniques are used to observe segment duration and pitch patterns in the speech production of learners and native speakers. Results indicate the learners can control timing, but their phonetic realization differs from that of native speakers.…
Establishing Vocal Verbalizations in Mute Mongoloid Children.
ERIC Educational Resources Information Center
Buddenhagen, Ronald G.
Behavior modification as an attack upon the problem of mutism in mongoloid children establishes the basis of the text. Case histories of four children in a state institution present the specific strategy of speech therapy using verbal conditioning. Imitation and attending behavior, verbal chaining, phonetic theory, social reinforcement,…
ERIC Educational Resources Information Center
Lovegren, Jesse Stuart James
2013-01-01
This dissertation is an attempt to state what is known at present about the grammar of Mungbam (ISO 639-3 [mij]). Mungbam is a Niger-Congo language spoken in the Northwest Region of Cameroon. The dissertation is a descriptive grammar, covering the phonetics, phonology, morphology, and syntax of the language. Source data are texts and elicited data…
Electrophysiological Responses to Coarticulatory and Word Level Miscues
ERIC Educational Resources Information Center
Archibald, Lisa M. D.; Joanisse, Marc F.
2011-01-01
The influence of coarticulation cues on spoken word recognition is not yet well understood. This acoustic/phonetic variation may be processed early and recognized as sensory noise to be stripped away, or it may influence processing at a later prelexical stage. The present study used event-related potentials (ERPs) in a picture/spoken word matching…
Keep Listening: Grammatical Context Reduces but Does Not Eliminate Activation of Unexpected Words
ERIC Educational Resources Information Center
Strand, Julia F.; Brown, Violet A.; Brown, Hunter E.; Berg, Jeffrey J.
2018-01-01
To understand spoken language, listeners combine acoustic-phonetic input with expectations derived from context (Dahan & Magnuson, 2006). Eye-tracking studies on semantic context have demonstrated that the activation levels of competing lexical candidates depend on the relative strengths of the bottom-up input and top-down expectations (cf.…
ERIC Educational Resources Information Center
Bang, Hye-Young; Clayards, Meghan; Goad, Heather
2017-01-01
Purpose: The developmental trajectory of English /s/ was investigated to determine the extent to which children's speech productions are acoustically fine-grained. Given the hypothesis that young children have adultlike phonetic knowledge of /s/, the following were examined: (a) whether this knowledge manifests itself in acoustic spectra that…
Phonological Differentiation before Age Two in a Tagalog-Spanish-English Trilingual Child
ERIC Educational Resources Information Center
Montanari, Simona
2011-01-01
This study focuses on a trilingual toddler's ability to differentiate her Tagalog, Spanish and English productions on phonological/phonetic grounds. Working within the articulatory phonology framework, the word-initial segments produced by the child in Tagalog, Spanish and English words at age 1;10 were narrowly transcribed by two researchers and…
ERIC Educational Resources Information Center
Feldman, David
1975-01-01
This paper discusses the prerequisites to programed language instruction, the role of the native language and the level of skill, and then explains materials and machines needed for such a program. Particular attention is given to phonetics. (Text is in Spanish.) (CK)
Structural Influences on Initial Accent Placement in French
ERIC Educational Resources Information Center
Astesano, Corine; Bard, Ellen Gurman; Turk, Alice
2007-01-01
In addition to the phrase-final accent (FA), the French phonological system includes a phonetically distinct Initial Accent (IA). The present study tested two proposals: that IA marks the onset of phonological phrases, and that it has an independent rhythmic function. Eight adult native speakers of French were instructed to read syntactically…
Tracking Citations: A Science Detective Story
ERIC Educational Resources Information Center
Chirkina, Galina V.; Grigorenko, Elena L.
2014-01-01
The earliest hypothesis concerning the phonetic-phonological roots of reading and writing learning disabilities is usually attributed to Boder in the U.S. literature. Yet by following a trail of references to work in psychology and education conducted some 30 years earlier in the USSR, we find the seeds of this idea already well established in the…
Proficient Readers' Reading Behavior in Taiwan: The Study of Young Chinese Readers
ERIC Educational Resources Information Center
Chang, Li-Chun
2015-01-01
The purpose of this study was to explore the reading behavior of young proficient Chinese readers at preschool age. Especially, the roles of phonetic skill and Chinese Character recognition in reading comprehension were explored. 10 kindergartens were recruited to participate in the study. Subjects were 72-98 kindergarten children. Instruments…
The Unified Phonetic Transcription for Teaching and Learning Chinese Languages
ERIC Educational Resources Information Center
Shieh, Jiann-Cherng
2011-01-01
In order to preserve distinctive cultures, people are eager to devise writing systems for their languages as recording tools. Mandarin, Taiwanese, and Hakka are three major and the most popular dialects of the Han languages spoken in Chinese society. Their writing systems are all in Han characters. Various and independent phonetic…
A Grammar of Buem, the Lelemi Language.
ERIC Educational Resources Information Center
Allan, Edward Jay
A detailed grammar of Buem, one of the Togo-Remnant Languages spoken in Ghana's Volta region, describes the major structures and many minor structures occurring in informal and semi-formal speech. The phonetics and much of the phonology are described in taxonomic terms, and the vowel harmony system, syntax, and morphology are described in a…
ERIC Educational Resources Information Center
Richgels, Donald J.
2004-01-01
Forty-three years ago, Bloomfield and Barnhart (1961) published "Let's Read: A Linguistic Approach." Their notion of linguistics-informed literacy instruction was to carefully control the vocabulary of texts in order to exploit phonetic regularities. In Lesson 4 students read, "A man at bat had a tan cap" (p. 63). Even as far along as Lesson…
Linguistic Significance of Babbling: Evidence from a Tracheostomized Infant.
ERIC Educational Resources Information Center
Locke, John L.; Pearson, Dawn M.
1990-01-01
Examines the phonetic patterns and linguistic development of an infant who was tracheostomized during the period that infants normally begin to produce syllabic vocalization. It was found that the infant had developed only a tenth of the canonical syllables expected in normally developing infants, a small inventory of consonant-like segments, and…
Electrophysiological Evidence of Phonetic Normalization across Coarticulation in Infants
ERIC Educational Resources Information Center
Mersad, Karima; Dehaene-Lambertz, Ghislaine
2016-01-01
The auditory neural representations of infants can easily be studied with electroencephalography using mismatch experimental designs. We recorded high-density event-related potentials while 3-month-old infants were listening to trials consisting of CV syllables produced with different vowels (/bX/ or /gX/). The consonant remained the same for the…
Phonological and Phonetic Evidence for Trochaic Metrical Structure in Standard Chinese
ERIC Educational Resources Information Center
Sui, Yanyan
2013-01-01
Native speakers of Standard Chinese have significant difficulty judging the prominence of words with tones in a consistent way. How then can metrical structure in the language be diagnosed? This study approaches the question by investigating how metrical structure interacts with other aspects of phonology, especially tone; what foot type is used…
Using Design Principles to Consider Representation of the Hand in Some Notation Systems
ERIC Educational Resources Information Center
Hochgesang, Julie A.
2014-01-01
Linguists have long recognized the descriptive limitations of Stokoe notation, currently the most commonly used system for phonetic or phonological transcription, but continue using it because of its widespread influence (e.g., Siedlecki and Bonvillian, 2000). With the emergence of newer notation systems, the field will benefit from a discussion…
Features and Natural Classes in ASL Handshapes
ERIC Educational Resources Information Center
Whitworth, Cecily
2011-01-01
This article argues for the necessity of phonetic analysis in signed language linguistics and presents a case study of one analytical system being used in a preliminary attempt to identify natural classes and investigate variation in ASL handshapes. Robbin Battison (1978) first described what is now a widely accepted list of basic handshapes,…
Phonics Is Phonics Is Phonics--Or Is It?
ERIC Educational Resources Information Center
McCulloch, Myrna T.
For 60 years, confusion and misinformation have reigned supreme whenever the subject of teaching phonics comes up for discussion. The paper considers various phonics programs, both old and new, and appraises their effectiveness. It also discusses works on phonetics by some well-known researchers and experts in reading, among them Frank Smith,…
ERIC Educational Resources Information Center
Boissonneault, Chantal, Ed.
Papers on language research include: "L'expression de l'opposition en Latin" ("The Expression of Opposition in Latin") (Claude Begin); "Le français de l'Abitibi: caractéristiques phonétiques et origine socio-géographique des locuteurs" ("The French of Abitibi: Phonetic Characteristics and Socio-Geographic Origin…
Malaysian English: An Instrumental Analysis of Vowel Contrasts
ERIC Educational Resources Information Center
Pillai, Stefanie; Don, Zuraidah Mohd.; Knowles, Gerald; Tang, Jennifer
2010-01-01
This paper makes an instrumental analysis of English vowel monophthongs produced by 47 female Malaysian speakers. The focus is on the distribution of Malaysian English vowels in the vowel space, and the extent to which there is phonetic contrast between traditionally paired vowels. The results indicate that, like neighbouring varieties of English,…
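Since the record above measures phonetic contrast between traditionally paired vowels in the vowel space, a minimal sketch of one such measure is the Euclidean distance between mean F1/F2 positions. The formant values and vowel labels below are hypothetical, not the Malaysian English data:

```python
# Minimal sketch with hypothetical formant values (not the paper's data):
# contrast between traditionally paired vowels can be summarized as the
# Euclidean distance between their mean positions in F1/F2 space.

import math

MEAN_FORMANTS = {            # hypothetical mean (F1, F2) in Hz
    "iː": (300.0, 2300.0),
    "ɪ":  (330.0, 2250.0),   # very close to iː => little phonetic contrast
    "ɑː": (700.0, 1100.0),
}

def contrast(v1: str, v2: str) -> float:
    """Euclidean distance (Hz) between two vowels in F1/F2 space."""
    f1a, f2a = MEAN_FORMANTS[v1]
    f1b, f2b = MEAN_FORMANTS[v2]
    return math.hypot(f1a - f1b, f2a - f2b)

if __name__ == "__main__":
    print(f"iː vs ɪ : {contrast('iː', 'ɪ'):6.1f} Hz")
    print(f"iː vs ɑː: {contrast('iː', 'ɑː'):6.1f} Hz")
```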
Pitch Processing in Tonal-Language-Speaking Children with Autism: An Event-Related Potential Study
ERIC Educational Resources Information Center
Yu, Luodi; Fan, Yuebo; Deng, Zhizhou; Huang, Dan; Wang, Suiping; Zhang, Yang
2015-01-01
The present study investigated pitch processing in Mandarin-speaking children with autism using event-related potential measures. Two experiments were designed to test how acoustic, phonetic and semantic properties of the stimuli contributed to the neural responses for pitch change detection and involuntary attentional orienting. In comparison…
The Intonation and Signaling of Declarative Questions in Manchego Peninsular Spanish
ERIC Educational Resources Information Center
Henriksen, Nicholas C.
2012-01-01
This paper is an experimental investigation on the tonal structure and phonetic signaling of declarative questions by speakers of Manchego Peninsular Spanish, a dialect of Spanish for which little experimental research on intonation is currently available. Analysis 1 examines the scaling and timing properties of final rises produced by 16 speakers…
Using Sign Language in Your Classroom.
ERIC Educational Resources Information Center
Lawrence, Constance D.
This paper reviews the research on use of American Sign Language in elementary classes that do not include children with hearing impairment and also reports on the use of the manual sign language alphabet in a primary class learning the phonetic sounds of the alphabet. The research reported is overwhelmingly positive in support of using sign…
Context Effects in the Processing of Phonolexical Ambiguity in L2
ERIC Educational Resources Information Center
Chrabaszcz, Anna; Gor, Kira
2014-01-01
In order to comprehend speech, listeners have to combine low-level phonetic information about the incoming auditory signal with higher-order contextual information to make a lexical selection. This requires stable phonological categories and unambiguous representations of words in the mental lexicon. Unlike native speakers, second language (L2)…
Infant Learning Is Influenced by Local Spurious Generalizations
ERIC Educational Resources Information Center
Gerken, LouAnn; Quam, Carolyn
2017-01-01
In previous work, 11-month-old infants were able to learn rules about the relation of the consonants in CVCV words from just four examples. The rules involved phonetic feature relations (same voicing or same place of articulation), and infants' learning was impeded when pairs of words allowed alternative possible generalizations (e.g. two words…
The Female-to-Male Transsexual Voice: Physiology vs. Performance in Production
ERIC Educational Resources Information Center
Papp, Viktoria
2011-01-01
Results of the three studies on the speech production of female-to-male transgender individuals (transmen) present phonetic evidence that transmen produce their speech through what I term triple decoupling. Transmen successfully decouple gender from biological sex. The results of the longitudinal studies showed that speakers born and raised…
"Pygmalion": A Study of Socio-Semantics
ERIC Educational Resources Information Center
Anugerahwati, Mirjam
2010-01-01
This article discusses the novel "Pygmalion" by George Bernard Shaw (1957) which depicts Eliza, a flower girl from East London, who became the subject of an "experiment" by a Professor of Phonetics who vowed to change the way she spoke. The story is an excellent example of a very real and contextual portrait of how language,…
Learning General Phonological Rules from Distributional Information: A Computational Model
ERIC Educational Resources Information Center
Calamaro, Shira; Jarosz, Gaja
2015-01-01
Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony…
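The record above concerns learning allophony from distributional information. One classic distributional cue, complementary distribution, can be sketched in a few lines; the toy corpus below is invented, and the cited model is considerably richer, combining distributional statistics with other evidence:

```python
# Toy sketch of one distributional cue to allophony: two sounds that never
# occur in the same immediate context (complementary distribution) are
# candidate allophones. The corpus below is invented; real models combine
# this cue with phonetic similarity and other evidence.

from collections import defaultdict

def contexts(corpus: list[str]) -> dict[str, set[tuple[str, str]]]:
    """Collect (previous, next) segment contexts for every sound."""
    ctx = defaultdict(set)
    for word in corpus:
        padded = ["#"] + list(word) + ["#"]
        for i in range(1, len(padded) - 1):
            ctx[padded[i]].add((padded[i - 1], padded[i + 1]))
    return ctx

def is_complementary(corpus: list[str], x: str, y: str) -> bool:
    """True if x and y never share an immediate context in the corpus."""
    ctx = contexts(corpus)
    return not (ctx[x] & ctx[y])

if __name__ == "__main__":
    corpus = ["tap", "pat", "ada", "idi"]      # "d" only between vowels here
    print(is_complementary(corpus, "t", "d"))  # True: candidate allophones
    print(is_complementary(corpus, "p", "t"))  # False: contexts overlap
```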
English Speech Acquisition in 3- to 5-Year-Old Children Learning Russian and English
ERIC Educational Resources Information Center
Gildersleeve-Neumann, Christina E.; Wright, Kira L.
2010-01-01
Purpose: English speech acquisition in Russian-English (RE) bilingual children was investigated, exploring the effects of Russian phonetic and phonological properties on English single-word productions. Russian has more complex consonants and clusters and a smaller vowel inventory than English. Method: One hundred thirty-seven single-word samples…
Initial Teaching Orthographies.
ERIC Educational Resources Information Center
Dewey, Godfrey
To achieve its purpose, an initial teaching orthography (i.t.o.) should be as simple in form and substance as possible; it should be phonemic rather than phonetic. The 40 sounds distinguished by Pitmanic shorthand and some provision for schwa can serve as a basic code. The symbols can be derived from either of two major sources--standardizing the…
Children's Dictionary of Occupations. Third Edition.
ERIC Educational Resources Information Center
Parramore, Barbara M.; Hopke, William E.; Drier, Harry N.
About 300 job titles are listed and defined with an illustration of a child working in each one in this specialized dictionary. Approximate phonetic pronunciations are given. Both girls and boys of various racial or ethnic backgrounds are used in the illustrations. Discussion of the world of work, getting a job, kinds of jobs, careers, and the…
Stimulability: Relationships to Other Characteristics of Children's Phonological Systems
ERIC Educational Resources Information Center
Tyler, Ann A.; Macrae, Toby
2010-01-01
In honour of Miccio's memory, this article revisits the topic of stimulability in children with speech sound disorders (SSD). Eighteen children with SSD, aged 3;6-5;5 (M = 4;8), were tested for their system-wide stimulability, percentage consonants correct (PCC), phonetic inventory size, and oral- and speech-motor skills. Pearson Product Moment…
ERIC Educational Resources Information Center
Emerich, Giang Huong
2012-01-01
In this dissertation, I provide a new analysis of the Vietnamese vowel system as a system with fourteen monophthongs and nineteen diphthongs based on phonetic and phonological data. I propose that these Vietnamese contour vowels--/ie/, /ɯ?/ and /uo/--should be grouped with these eleven monophthongs /i e ɛ a ɐ ? ? ɯ…
ERIC Educational Resources Information Center
Munson, Benjamin
2007-01-01
Previous studies have shown that a subset of gay, lesbian, and bisexual (GLB) and heterosexual adults produce distinctive patterns of phonetic variation that allow listeners to detect their sexual orientation from audio-only samples of read speech. The current investigation examined the extent to which judgments of sexual orientation from speech…
ERIC Educational Resources Information Center
Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann
2011-01-01
During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream--prior to its fusion with auditory phonological features [Hertrich,…
The General Phonetic Characteristics of Languages. Final Report-1967-1968.
ERIC Educational Resources Information Center
Delattre, Pierre
In this final stage of a series of three linguistic studies conducted at the University of California, Santa Barbara, four topics are presented. The longest is a study of consonant gemination in German, Spanish, French, and American English from acoustic, perceptual, and radiographic points of view. Pharyngeal features are studied in the…
Clinical Application of the Mean Babbling Level and Syllable Structure Level
ERIC Educational Resources Information Center
Morris, Sherrill R.
2010-01-01
Purpose: This clinical exchange reviews two independent phonological assessment measures: mean babbling level (MBL) and syllable structure level (SSL). Both measures summarize phonetic inventory and syllable shape in a calculated average and have been used in research to describe the phonological abilities of children ages 9 to 36 months. An…
Kick the Ball or Kicked the Ball? Perception of the Past Morpheme "-ed" by Second Language Learners
ERIC Educational Resources Information Center
Bell, Philippa; Trofimovich, Pavel; Collins, Laura
2015-01-01
Explanations for the well-documented second language (L2) learning challenge of the English regular past include verb semantics (Bardovi-Harlig, 2000), phonetic properties (Goad, White, & Steele, 2003), and frequency factors (Collins, Trofimovich, White, Cardoso, & Horst, 2009). Difficulty perceiving past-tense morphology (i.e., hearing…
Piecing Together Phonics and Whole Language: A Balanced Approach.
ERIC Educational Resources Information Center
Pernai, Karen; Pulciani, Jodie; Vahle, Heather
The purpose of this study was to determine the effectiveness of the implementation of the Hello Reader Scholastic Phonics program as an addition to an already rich, literature-based curriculum. Test data suggested that primary grade students were not developing phonetic skills sufficient to meet district expectations. This research was designed to…
ERIC Educational Resources Information Center
Qian, Manman; Chukharev-Hudilainen, Evgeny; Levis, John
2018-01-01
Many types of L2 phonological perception are often difficult to acquire without instruction. These difficulties with perception may also be related to intelligibility in production. Instruction on perception contrasts is more likely to be successful with the use of phonetically variable input made available through computer-assisted pronunciation…
On the Psychological Reality of Underlying Phonological Representations.
ERIC Educational Resources Information Center
Trammell, Robert L.
In "The Sound Pattern of English," Chomsky and Halle maintain that the phonetic representation of most words can be generated from underlying forms and a small set of rules. Since these underlying forms are frequently close to the traditional spelling, we may hypothesize that literate native speakers share comparable internalized rules which…
Technologies for the Study of Speech: Review and an Application
ERIC Educational Resources Information Center
Babatsouli, Elena
2015-01-01
Technologies used for the study of speech are classified here into non-intrusive and intrusive. The paper informs on current non-intrusive technologies that are used for linguistic investigations of the speech signal, both phonological and phonetic. Providing a point of reference, the review covers existing technological advances in language…
Audiovisual Speech Recalibration in Children
ERIC Educational Resources Information Center
van Linden, Sabine; Vroomen, Jean
2008-01-01
In order to examine whether children adjust their phonetic speech categories, children of two age groups, five-year-olds and eight-year-olds, were exposed to a video of a face saying /aba/ or /ada/ accompanied by an auditory ambiguous speech sound halfway between /b/ and /d/. The effect of exposure to these audiovisual stimuli was measured on…
ERIC Educational Resources Information Center
Creel, Sarah C.
2012-01-01
Recent research has considered the phonological specificity of children's word representations, but few studies have examined the flexibility of those representations. Tolerating acoustic-phonetic deviations has been viewed as a negative in terms of discriminating minimally different word forms, but may be a positive in an increasingly…
Generative Phonology in the Clinic. CLCS Occasional Paper No. 10.
ERIC Educational Resources Information Center
Kallen, Jeffrey L.
A discussion of the use of generative phonology in the speech clinic, especially with children, begins with an outline of some constructs of generative phonology. First, some notes on phonetic notation and definitions of terms used in nongenerative phonology that have special meanings in this field are presented. Then a discussion of distinctive…
The Emergence of Phonetic Categories in Korean-English Bilingual Children
ERIC Educational Resources Information Center
Lee, Sue Ann S.; Iverson, Gregory K.
2017-01-01
The present study examined the speech production of three-year-old Korean-English bilingual (KEB) children. English and Korean stops, as well as front vowels in both languages, were compared acoustically among the KEB children, then also measured against those of their age-equivalent monolingual counterparts. Evidence of distinctive phonetic…
The Birth and Growth of a Scientific Journal
ERIC Educational Resources Information Center
Kent, Raymond D.
2011-01-01
"Clinical Linguistics & Phonetics (CLP)" and its namesake field have accomplished a great deal in the last quarter of a century. The success of the journal parallels the growth and vitality of the field it represents. The markers of journal achievement are several, including increased number of journal pages published annually; greater diversity…
Neural Coding of Relational Invariance in Speech: Human Language Analogs to the Barn Owl.
ERIC Educational Resources Information Center
Sussman, Harvey M.
1989-01-01
The neuronal model shown to code sound-source azimuth in the barn owl by H. Wagner et al. in 1987 is used as the basis for a speculative brain-based human model, which can establish contrastive phonetic categories to solve the problem of perception "non-invariance." (SLD)
ERIC Educational Resources Information Center
Martin, Maureen K; Wright, Lindsay Elizabeth; Perry, Susan; Cornett, Daphne; Schraeder, Missy; Johnson, James T.
2016-01-01
Research into intervention strategies for developmental verbal dyspraxia (DVD) clearly demonstrates the need to identify effective interventions. The goals of this study were to examine changes in articulation skills following the use of phonetic, multimodal intervention and to consider the relationship between these improved articulation skills…
Letters and American Literacy.
ERIC Educational Resources Information Center
Kevis, David E.
The work is intended to help those who will teach reading and writing. Practical suggestions are offered in the final two chapters, while the opening three provide intellectual perspectives. A unifying theme is raising awareness of writing as a visual system and breaking the spell by which letter phonetics can…
Topics in Ho Morphophonology and Morphosyntax
ERIC Educational Resources Information Center
Pucilowski, Anna
2013-01-01
Ho, an under-documented North Munda language of India, is known for its complex verb forms. This dissertation focuses on analysis of several features of those complex verbs, using data from original fieldwork undertaken by the author. By way of background, an analysis of the phonetics, phonology and morphophonology of Ho is first presented. Ho has…
ERIC Educational Resources Information Center
Williams, Jennifer S.
2012-01-01
In 2011, a small Midwestern school district referred an increasing number of 2nd-4th grade students, with reading problems due to phonetic and phonological awareness deficits, to the district's intervention team. Framed in Shulman's pedagogical content knowledge model and the International Dyslexia Association's phonological deficit theory of…
ERIC Educational Resources Information Center
So, Connie K.; Best, Catherine T.
2014-01-01
This study examined how native speakers of Australian English and French, nontone languages with different lexical stress properties, perceived Mandarin tones in a sentence environment according to their native sentence intonation categories (i-Categories) in connected speech. Results showed that both English and French speakers categorized…
Speech Perception Deficits in Poor Readers: Auditory Processing or Phonological Coding?
ERIC Educational Resources Information Center
Mody, Maria; And Others
1997-01-01
Forty second-graders, 20 good and 20 poor readers, completed a /ba/-/da/ temporal order judgment (TOJ) task. The groups did not differ in TOJ when /ba/ and /da/ were paired with more easily discriminated syllables. Poor readers' difficulties with /ba/-/da/ reflected perceptual confusion between phonetically similar syllables rather than difficulty…
A Procedure for the Computerized Analysis of Cleft Palate Speech Transcription
ERIC Educational Resources Information Center
Fitzsimons, David A.; Jones, David L.; Barton, Belinda; North, Kathryn N.
2012-01-01
The phonetic symbols used by speech-language pathologists to transcribe speech contain underlying hexadecimal values used by computers to correctly display and process transcription data. This study aimed to develop a procedure to utilise these values as the basis for subsequent computerized analysis of cleft palate speech. A computer keyboard…
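The record above turns on the fact that each phonetic transcription symbol has an underlying hexadecimal (Unicode) value a computer can process. A minimal sketch of that idea follows; the helper function is an assumption for illustration, not the study's keyboard procedure:

```python
# Minimal sketch (not the study's procedure): every phonetic symbol in a
# transcription carries a Unicode code point whose hexadecimal value can be
# used to store, compare, or tabulate the symbols unambiguously.

def codepoints(transcription: str) -> list[tuple[str, str]]:
    """Pair each character with its hexadecimal Unicode code point."""
    return [(ch, f"U+{ord(ch):04X}") for ch in transcription]

if __name__ == "__main__":
    # Glottal stops and nasalized vowels are common in cleft palate speech
    # transcription and need stable code points for later analysis.
    for ch, cp in codepoints("ʔãŋa"):
        print(ch, cp)
```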