ERIC Educational Resources Information Center
Medwetsky, Larry
2011-01-01
Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…
Mani, Nivedita; Huettig, Falk
2014-10-01
Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing, potentially attributable to participants' literacy skills. Against this background, the current study examined the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, as in previous studies, successfully anticipated upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. Copyright © 2014 Elsevier Inc. All rights reserved.
Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study
ERIC Educational Resources Information Center
Moeller, Aleidine J.; Theiler, Janine
2014-01-01
Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…
On-Line Orthographic Influences on Spoken Language in a Semantic Task
ERIC Educational Resources Information Center
Pattamadilok, Chotiga; Perre, Laetitia; Dufau, Stephane; Ziegler, Johannes C.
2009-01-01
Literacy changes the way the brain processes spoken language. Most psycholinguists believe that orthographic effects on spoken language are either strategic or restricted to meta-phonological tasks. We used event-related brain potentials (ERPs) to investigate the locus and the time course of orthographic effects on spoken word recognition in a…
Lexical Processing in Spanish Sign Language (LSE)
ERIC Educational Resources Information Center
Carreiras, Manuel; Gutierrez-Sigut, Eva; Baquero, Silvia; Corina, David
2008-01-01
Lexical access is concerned with how the spoken or visual input of language is projected onto the mental representations of lexical forms. To date, most theories of lexical access have been based almost exclusively on studies of spoken languages and/or orthographic representations of spoken languages. Relatively few studies have examined how…
Relationships between Lexical Processing Speed, Language Skills, and Autistic Traits in Children
ERIC Educational Resources Information Center
Abrigo, Erin
2012-01-01
According to current models of spoken word recognition listeners understand speech as it unfolds over time. Eye tracking provides a non-invasive, on-line method to monitor attention, providing insight into the processing of spoken language. In the current project a spoken lexical processing assessment (LPA) confirmed current theories of spoken…
The Temporal Structure of Spoken Language Understanding.
ERIC Educational Resources Information Center
Marslen-Wilson, William; Tyler, Lorraine Komisarjevsky
1980-01-01
An investigation of word-by-word time-course of spoken language understanding focused on word recognition and structural and interpretative processes. Results supported an online interactive language processing theory, in which lexical, structural, and interpretative knowledge sources communicate and interact during processing efficiently and…
Henry, Maya L; Beeson, Pélagie M; Alexander, Gene E; Rapcsak, Steven Z
2012-02-01
Connectionist theories of language propose that written language deficits arise as a result of damage to semantic and phonological systems that also support spoken language production and comprehension, a view referred to as the "primary systems" hypothesis. The objective of the current study was to evaluate the primary systems account in a mixed group of individuals with primary progressive aphasia (PPA) by investigating the relation between measures of nonorthographic semantic and phonological processing and written language performance and by examining whether common patterns of cortical atrophy underlie impairments in spoken versus written language domains. Individuals with PPA and healthy controls were administered a language battery, including assessments of semantics, phonology, reading, and spelling. Voxel-based morphometry was used to examine the relation between gray matter volumes and language measures within brain regions previously implicated in semantic and phonological processing. In accordance with the primary systems account, our findings indicate that spoken language performance is strongly predictive of reading/spelling profile in individuals with PPA and suggest that common networks of critical left hemisphere regions support central semantic and phonological processes recruited for spoken and written language.
Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.
Huettig, Falk; Brouwer, Susanne
2015-05-01
It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.
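Anticipation effects in visual-world designs like Experiment 2 are typically quantified as fixation proportions to the target in the window between article onset and noun onset, then related to individual reading scores. Below is a minimal sketch of that analysis logic in Python, using simulated data and hypothetical variable names, not the study's actual pipeline (the correlation is built into the simulated data for illustration):

```python
# Sketch of a typical anticipation analysis for a visual-world design like
# Experiment 2. Data, column names, and the time window are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 30

# Hypothetical per-participant measures: proportion of fixations on the
# target between article onset and noun onset (the window where the
# gender-marked article 'de'/'het' licenses prediction), and a word
# reading score constructed to correlate with it for illustration.
anticipation = rng.uniform(0.2, 0.8, n_participants)
reading_score = 50 + 40 * anticipation + rng.normal(0, 5, n_participants)

# Chance level is 0.25 with four objects on screen; test anticipation first.
t, p_anticipation = stats.ttest_1samp(anticipation, 0.25)
print(f"anticipation above chance: t={t:.2f}, p={p_anticipation:.4f}")

# Then ask whether reading ability predicts prediction, as in the paper.
r, p_corr = stats.pearsonr(reading_score, anticipation)
print(f"reading score vs. anticipatory looks: r={r:.2f}, p={p_corr:.4f}")
```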
Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension
ERIC Educational Resources Information Center
Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica
2015-01-01
When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…
Language as a multimodal phenomenon: implications for language learning, processing and evolution
Vigliocco, Gabriella; Perniss, Pamela; Vinson, David
2014-01-01
Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is composed wholly of an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms; and, finally, iconicity is the norm, rather than the exception, in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages. PMID:25092660
Using Language Sample Analysis to Assess Spoken Language Production in Adolescents
ERIC Educational Resources Information Center
Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann
2016-01-01
Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…
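For readers unfamiliar with language sample analysis, the basic summary measures such software reports can be computed from an utterance-segmented transcript in a few lines. The following is an illustrative sketch only, not the SALT software itself:

```python
# Minimal sketch of language-sample measures of the kind SALT reports.
# Illustrative only: utterances are assumed pre-segmented, one per string,
# and morphemes are approximated by words.
from collections import Counter

transcript = [
    "we went to the park yesterday",
    "my brother kicked the ball really far",
    "then we got ice cream",
]

tokens = [w.lower() for utt in transcript for w in utt.split()]
mlu_words = len(tokens) / len(transcript)   # mean length of utterance (words)
ndw = len(set(tokens))                      # number of different words
ttr = ndw / len(tokens)                     # type-token ratio

print(f"MLU (words): {mlu_words:.2f}")
print(f"NDW: {ndw}, TTR: {ttr:.2f}")
print("most frequent:", Counter(tokens).most_common(3))
```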
L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm
ERIC Educational Resources Information Center
Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour
2016-01-01
The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…
ERIC Educational Resources Information Center
Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony
2013-01-01
Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…
Individual differences in online spoken word recognition: Implications for SLI
McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce
2012-01-01
Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels; vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014
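The best-fitting variant above manipulated lexical decay, i.e., how quickly a word's activation fades once bottom-up support ends. The following toy activation-dynamics sketch (not TRACE itself; the update rule and parameter values are illustrative assumptions) shows how a decay parameter controls how long competitor activation lingers:

```python
# Toy activation dynamics, not TRACE itself: a single lexical node receives
# bottom-up input while the word is "heard", with update
# a(t+1) = a(t) * (1 - decay) + input(t).
# Larger decay makes activation die away faster once input ends, the
# parameter dimension that best fit the individual differences above.
def simulate(decay, input_steps=10, total_steps=40, gain=0.2):
    a, trace = 0.0, []
    for t in range(total_steps):
        bottom_up = gain if t < input_steps else 0.0  # input only while heard
        a = a * (1.0 - decay) + bottom_up
        trace.append(a)
    return trace

for decay in (0.05, 0.2):
    trace = simulate(decay)
    print(f"decay={decay}: peak={max(trace):.2f}, final={trace[-1]:.3f}")
```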
ERIC Educational Resources Information Center
Conway, Christopher M.; Karpicke, Jennifer; Pisoni, David B.
2007-01-01
Spoken language consists of a complex, sequentially arrayed signal that contains patterns that can be described in terms of statistical relations among language units. Previous research has suggested that a domain-general ability to learn structured sequential patterns may underlie language acquisition. To test this prediction, we examined the…
Action and object word writing in a case of bilingual aphasia.
Kambanaros, Maria; Messinis, Lambros; Anyfantis, Emmanouil
2012-01-01
We report the spoken and written naming of a bilingual speaker with aphasia in two languages that differ in morphological complexity, orthographic transparency and script: Greek and English. AA presented with difficulties in spoken picture naming together with preserved written picture naming for action words in Greek. In English, AA showed similar performance across both tasks for action and object words, i.e. difficulties retrieving action and object names for both spoken and written naming. Our findings support the hypothesis that the cognitive processes used for spoken and written naming are independent components of the language system and can be selectively impaired after brain injury. In the case of bilingual speakers, such processes impact on both languages. We conclude that grammatical category is an organizing principle in bilingual dysgraphia.
Research on Spoken Language Processing. Progress Report No. 21 (1996-1997).
ERIC Educational Resources Information Center
Pisoni, David B.
This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…
ERIC Educational Resources Information Center
Li, Xiao-qing; Ren, Gui-qin
2012-01-01
An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…
The Bilingual Language Interaction Network for Comprehension of Speech
Shook, Anthony; Marian, Viorica
2013-01-01
During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension. PMID:24363602
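The building block of BLINCS' interconnected levels is the self-organizing map. Below is a minimal SOM update sketch with a toy one-dimensional topology and random input vectors; it illustrates the mechanism only, not the published model's architecture or training data:

```python
# Minimal self-organizing map (SOM) update rule of the kind BLINCS builds
# its levels from. Toy 1-D map and random inputs; a sketch of the
# mechanism, not the published model.
import numpy as np

rng = np.random.default_rng(1)
n_units, dim = 20, 8
weights = rng.random((n_units, dim))  # one weight vector per map unit

def train_step(x, lr=0.1, radius=2.0):
    # Best-matching unit: the unit whose weights are closest to the input.
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Gaussian neighborhood: units near the BMU move toward the input too,
    # which is what produces topographic organization on the map.
    dist = np.abs(np.arange(n_units) - bmu)
    h = np.exp(-(dist ** 2) / (2 * radius ** 2))
    weights[:] += lr * h[:, None] * (x - weights)

for _ in range(500):
    train_step(rng.random(dim))
print("trained map shape:", weights.shape)
```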
Speech perception and spoken word recognition: past and present.
Jusczyk, Peter W; Luce, Paul A
2002-02-01
The scientific study of the perception of spoken language has been an exciting, prolific, and productive area of research for more than 50 years. We have learned much about infants' and adults' remarkable capacities for perceiving and understanding the sounds of their language, as evidenced by our increasingly sophisticated theories of acquisition, process, and representation. We present a selective, but we hope, representative review of the past half century of research on speech perception, paying particular attention to the historical and theoretical contexts within which this research was conducted. Our foci in this review fall on three principal topics: early work on the discrimination and categorization of speech sounds, more recent efforts to understand the processes and representations that subserve spoken word recognition, and research on how infants acquire the capacity to perceive their native language. Our intent is to provide the reader with a sense of the progress our field has experienced over the last half century in understanding the human's extraordinary capacity for the perception of spoken language.
Spoken word recognition by Latino children learning Spanish as their first language
HURTADO, NEREYDA; MARCHMAN, VIRGINIA A.; FERNALD, ANNE
2010-01-01
Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development. PMID:17542157
The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing
Gow, David W.
2012-01-01
Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing. PMID:22498237
ERIC Educational Resources Information Center
Rodd, Jennifer M.; Longe, Olivia A.; Randall, Billi; Tyler, Lorraine K.
2010-01-01
Spoken language comprehension is known to involve a large left-dominant network of fronto-temporal brain regions, but there is still little consensus about how the syntactic and semantic aspects of language are processed within this network. In an fMRI study, volunteers heard spoken sentences that contained either syntactic or semantic ambiguities…
Inferring Speaker Affect in Spoken Natural Language Communication
ERIC Educational Resources Information Center
Pon-Barry, Heather Roberta
2013-01-01
The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…
Overlapping networks engaged during spoken language production and its cognitive control.
Geranmayeh, Fatemeh; Wise, Richard J S; Mehta, Amrish; Leech, Robert
2014-06-25
Spoken language production is a complex brain function that relies on large-scale networks. These include domain-specific networks that mediate language-specific processes, as well as domain-general networks mediating top-down and bottom-up attentional control. Language control is thought to involve a left-lateralized fronto-temporal-parietal (FTP) system. However, these regions do not always activate for language tasks and similar regions have been implicated in nonlinguistic cognitive processes. These inconsistent findings suggest that either the left FTP is involved in multidomain cognitive control or that there are multiple spatially overlapping FTP systems. We present evidence from an fMRI study using multivariate analysis to identify spatiotemporal networks involved in spoken language production in humans. We compared spoken language production (Speech) with multiple baselines, counting (Count), nonverbal decision (Decision), and "rest," to pull apart the multiple partially overlapping networks that are involved in speech production. A left-lateralized FTP network was activated during Speech and deactivated during Count and nonverbal Decision trials, implicating it in cognitive control specific to sentential spoken language production. A mirror right-lateralized FTP network was activated in the Count and Decision trials, but not Speech. Importantly, a second overlapping left FTP network showed relative deactivation in Speech. These three networks, with distinct time courses, overlapped in the left parietal lobe. Contrary to the standard model of the left FTP as being dominant for speech, we revealed a more complex pattern within the left FTP, including at least two left FTP networks with competing functional roles, only one of which was activated in speech production. Copyright © 2014 Geranmayeh et al.
Infant perceptual development for faces and spoken words: An integrated approach
Watson, Tamara L; Robbins, Rachel A; Best, Catherine T
2014-01-01
There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. PMID:25132626
Orthographic effects in spoken word recognition: Evidence from Chinese.
Qu, Qingqing; Damian, Markus F
2017-06-01
Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.
Brain Bases of Morphological Processing in Young Children
Arredondo, Maria M.; Ip, Ka I; Hsu, Lucy Shih-Ju; Tardif, Twila; Kovelman, Ioulia
2017-01-01
How does the developing brain support the transition from spoken language to print? Two spoken language abilities form the initial base of child literacy across languages: knowledge of language sounds (phonology) and knowledge of the smallest units that carry meaning (morphology). While phonology has received much attention from the field, the brain mechanisms that support morphological competence for learning to read remain largely unknown. In the present study, young English-speaking children completed an auditory morphological awareness task behaviorally (n = 69, ages 6–12) and in fMRI (n = 16). The data revealed two findings: First, children with better morphological abilities showed greater activation in left temporo-parietal regions previously thought to be important for supporting phonological reading skills, suggesting that this region supports multiple language abilities for successful reading acquisition. Second, children showed activation in left frontal regions previously found active in young Chinese readers, suggesting morphological processes for reading acquisition might be similar across languages. These findings offer new insights for developing a comprehensive model of how spoken language abilities support children’s reading acquisition across languages. PMID:25930011
ERIC Educational Resources Information Center
Minaabad, Malahat Shabani
2011-01-01
Translation is the process of transferring written or spoken source language (SL) texts into equivalent written or spoken target language (TL) texts. Translation studies (TS) relies so heavily on a concept of meaning that one may claim there is no TS without reference to meaning. People's understanding of the meaning of sentences is far more…
Cognitive aging and hearing acuity: modeling spoken language comprehension.
Wingfield, Arthur; Amichetti, Nicole M; Lash, Amanda
2015-01-01
The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model; where it is strong and where there are gaps to be filled.
On-Line Syntax: Thoughts on the Temporality of Spoken Language
ERIC Educational Resources Information Center
Auer, Peter
2009-01-01
One fundamental difference between spoken and written language has to do with the "linearity" of speaking in time, in that the temporal structure of speaking is inherently the outcome of an interactive process between speaker and listener. But despite the status of "linearity" as one of Saussure's fundamental principles, in practice little more…
Geytenbeek, Joke J; Mokkink, Lidwine B; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J
2014-09-01
In clinical practice, a variety of diagnostic tests are available to assess a child's comprehension of spoken language. However, none of these tests have been designed specifically for use with children who have severe motor impairments and who experience severe difficulty when using speech to communicate. This article describes the process of investigating the reliability and validity of the Computer-Based Instrument for Low Motor Language Testing (C-BiLLT), which was specifically developed to assess spoken Dutch language comprehension in children with cerebral palsy and complex communication needs. The study included 806 children with typical development, and 87 nonspeaking children with cerebral palsy and complex communication needs, and was designed to provide information on the psychometric qualities of the C-BiLLT. The potential utility of the C-BiLLT as a measure of spoken Dutch language comprehension abilities for children with cerebral palsy and complex communication needs is discussed.
Williams, Joshua T; Newman, Sharlene D
2017-02-01
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively few studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and activation of the targets in sparse and dense neighborhoods was compared. Neighborhood density effects in the auditory primed lexical decision task were then compared to previous reports from native deaf signers who were processing sign language only. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, increased inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.
Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł
2015-05-01
In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we use fMRI to compare the neural bases of natural versus devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs, compared to either SJM or PJM without CCs, recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged the left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.
Direction Asymmetries in Spoken and Signed Language Interpreting
ERIC Educational Resources Information Center
Nicodemus, Brenda; Emmorey, Karen
2013-01-01
Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…
Neural Network Computing and Natural Language Processing.
ERIC Educational Resources Information Center
Borchardt, Frank
1988-01-01
Considers the application of neural network concepts to traditional natural language processing and demonstrates that neural network computing architecture can: (1) learn from actual spoken language; (2) observe rules of pronunciation; and (3) reproduce sounds from the patterns derived by its own processes. (Author/CB)
Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hagiwara, Hiroko
2011-04-01
Healthy adults can identify spoken words at a remarkable speed, by incrementally analyzing word-onset information. It is currently unknown how this adult-level speed of spoken-word processing emerges during children's native-language acquisition. In a picture-word mismatch paradigm, we manipulated the semantic congruency between picture contexts and spoken words, and recorded event-related potential (ERP) responses to the words. Previous similar studies focused on the N400 response, but we focused instead on the onsets of semantic congruency effects (N200 or Phonological Mismatch Negativity), which contain critical information for incremental spoken-word processing. We analyzed ERPs obtained longitudinally from two age cohorts of 40 primary-school children (total n=80) in a 3-year period. Children first tested at 7 years of age showed earlier onsets of congruency effects (by approximately 70 ms) when tested 2 years later (i.e., at age 9). Children first tested at 9 years of age did not show such shortening of onset latencies 2 years later (i.e., at age 11). Overall, children's onset latencies at age 9 appeared similar to those of adults. These data challenge the previous hypothesis that word processing is well established at age 7. Instead they support the view that the acceleration of spoken-word processing continues beyond age 7. Copyright © 2011 Elsevier Ltd. All rights reserved.
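Onset latencies of congruency effects of the kind analyzed here are commonly estimated as the first time point where the condition difference stays significant over a run of consecutive samples. A sketch on simulated ERP-like data follows; the sampling rate, run length, and threshold are illustrative assumptions, not the study's parameters:

```python
# Estimating the onset of a congruency effect: the first time point where
# pointwise paired t-tests stay significant for a run of consecutive
# samples. Simulated data; parameters are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_children, n_samples = 40, 300          # 300 samples = 600 ms at 500 Hz
t_ms = np.arange(n_samples) * 2

# Simulated waveforms: a congruency effect emerging at ~200 ms.
effect = np.where(t_ms > 200, -1.5, 0.0)
congruent = rng.normal(0, 1, (n_children, n_samples))
incongruent = rng.normal(0, 1, (n_children, n_samples)) + effect

# Pointwise paired t-tests; the first run of 25 consecutive significant
# samples (50 ms) counts as the effect onset.
p = stats.ttest_rel(incongruent, congruent, axis=0).pvalue
run, onset = 0, None
for i, sig in enumerate(p < 0.05):
    run = run + 1 if sig else 0
    if run == 25:
        onset = t_ms[i - 24]
        break
print("estimated onset:", onset, "ms")
```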
ERIC Educational Resources Information Center
Crowe, Kathryn; McLeod, Sharynne
2016-01-01
The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…
Spoken Grammar Awareness Raising: Does It Affect the Listening Ability of Iranian EFL Learners?
ERIC Educational Resources Information Center
Rashtchi, Mojgan; Afzali, Mahnaz
2011-01-01
Advances in spoken corpora analysis have brought about new insights into language pedagogy and have led to an awareness of the characteristics of spoken language. Current findings have shown that grammar of spoken language is different from written language. However, most listening and speaking materials are concocted based on written grammar and…
Simultaneous communication supports learning in noise by cochlear implant users.
Blom, Helen; Marschark, Marc; Machmer, Elizabeth
2017-01-01
This study sought to evaluate the potential of using spoken language and signing together (simultaneous communication, SimCom, sign-supported speech) as a means of improving speech recognition, comprehension, and learning by cochlear implant (CI) users in noisy contexts. Forty-eight college students who were active CI users watched videos of three short presentations, the text versions of which were standardized at the 8th-grade reading level. One passage was presented in spoken language only, one was presented in spoken language with multi-talker babble background noise, and one was presented via simultaneous communication with the same background noise. Following each passage, participants responded to 10 (standardized) open-ended questions designed to assess comprehension. Indicators of participants' spoken language and sign language skills were obtained via self-reports and objective assessments. When spoken materials were accompanied by signs, scores were significantly higher than when materials were spoken in noise without signs. Participants' receptive spoken language skills significantly predicted scores in all three conditions; neither their receptive sign skills nor age of implantation predicted performance. Students who are CI users typically rely solely on spoken language in the classroom. The present results, however, suggest that there are potential benefits of simultaneous communication for such learners in noisy settings. For those CI users who know sign language, the redundancy of speech and signs potentially can offset the reduced fidelity of spoken language in noise. Accompanying spoken language with signs can benefit learners who are CI users in noisy situations such as classroom settings. Factors associated with such benefits, such as receptive skills in signed and spoken modalities, classroom acoustics, and material difficulty need to be empirically examined.
Lesion localization of speech comprehension deficits in chronic aphasia.
Pillay, Sara B; Binder, Jeffrey R; Humphries, Colin; Gross, William L; Book, Diane S
2017-03-07
Voxel-based lesion-symptom mapping (VLSM) was used to localize impairments specific to multiword (phrase and sentence) spoken language comprehension. Participants were 51 right-handed patients with chronic left hemisphere stroke. They performed an auditory description naming (ADN) task requiring comprehension of a verbal description, an auditory sentence comprehension (ASC) task, and a picture naming (PN) task. Lesions were mapped using high-resolution MRI. VLSM analyses identified the lesion correlates of ADN and ASC impairment, first with no control measures, then adding PN impairment as a covariate to control for cognitive and language processes not specific to spoken language. ADN and ASC deficits were associated with lesions in a distributed frontal-temporal parietal language network. When PN impairment was included as a covariate, both ADN and ASC deficits were specifically correlated with damage localized to the mid-to-posterior portion of the middle temporal gyrus (MTG). Damage to the mid-to-posterior MTG is associated with an inability to integrate multiword utterances during comprehension of spoken language. Impairment of this integration process likely underlies the speech comprehension deficits characteristic of Wernicke aphasia. © 2017 American Academy of Neurology.
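The core of a VLSM analysis is a mass-univariate comparison: at each voxel, behavioral scores of patients with a lesion there are compared with scores of patients without one. A minimal sketch with simulated lesion masks and scores follows; real analyses add covariates (such as the PN control task above) and multiple-comparison correction, both omitted here:

```python
# Core of a VLSM analysis, sketched with simulated data: at each voxel,
# compare behavioral scores of patients with vs. without a lesion there.
# Real analyses add covariates and correct for multiple comparisons;
# neither is shown in this sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_patients, n_voxels = 51, 1000
lesions = rng.random((n_patients, n_voxels)) < 0.2  # binary lesion masks
score = rng.normal(0, 1, n_patients)                # comprehension score

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    lesioned, spared = score[lesions[:, v]], score[~lesions[:, v]]
    if lesioned.size >= 5 and spared.size >= 5:     # minimum-overlap rule
        t_map[v] = stats.ttest_ind(lesioned, spared).statistic

# Negative t = lower scores in lesioned patients, i.e., a deficit.
print("voxels tested:", int(np.isfinite(t_map).sum()))
print("strongest deficit voxel t =", round(float(np.nanmin(t_map)), 2))
```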
Automatic translation among spoken languages
NASA Technical Reports Server (NTRS)
Walter, Sharon M.; Costigan, Kelly
1994-01-01
The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.
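The MAVT architecture described above is a cascade of three components: speech recognition, machine translation, and speech synthesis. Below is a minimal sketch of such a cascade; the function bodies are placeholders standing in for real ASR/MT/TTS engines, not MAVT's actual interfaces:

```python
# Sketch of a cascaded speech-to-speech translation pipeline of the kind
# MAVT implements. All three stage functions are placeholder stubs, not
# MAVT's actual components or interfaces.
def recognize_speech(audio: bytes, lang: str) -> str:
    """Placeholder ASR: audio in the source language -> text."""
    return "where is the supply depot"              # stubbed result

def translate(text: str, src: str, tgt: str) -> str:
    """Placeholder MT: source-language text -> target-language text."""
    return "¿dónde está el depósito de suministros?"  # stubbed result

def synthesize_speech(text: str, lang: str) -> bytes:
    """Placeholder TTS: text -> audio in the target language."""
    return text.encode("utf-8")                     # stand-in for waveform

def speech_to_speech(audio: bytes, src: str, tgt: str) -> bytes:
    text = recognize_speech(audio, src)             # stage 1: ASR
    translated = translate(text, src, tgt)          # stage 2: MT
    return synthesize_speech(translated, tgt)       # stage 3: TTS

print(speech_to_speech(b"...", src="en", tgt="es").decode("utf-8"))
```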
Talker familiarity and spoken word recognition in school-age children
Levi, Susannah V.
2014-01-01
Research with adults has shown that spoken language processing is improved when listeners are familiar with talkers’ voices, known as the familiar talker advantage. The current study explored whether this ability extends to school-age children, who are still acquiring language. Children were familiarized with the voices of three German–English bilingual talkers and were tested on the speech of six bilinguals, three of whom were familiar. Results revealed that children do show improved spoken language processing when they are familiar with the talkers, but this improvement was limited to highly familiar lexical items. This restriction of the familiar talker advantage is attributed to differences in the representation of highly familiar and less familiar lexical items. In addition, children did not exhibit accent-general learning; despite having been exposed to German-accented talkers during training, there was no improvement for novel German-accented talkers. PMID:25159173
Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.
2014-01-01
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language. PMID:24904497
Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory
2008-12-01
Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.
Grammatical Processing of Spoken Language in Child and Adult Language Learners
ERIC Educational Resources Information Center
Felser, Claudia; Clahsen, Harald
2009-01-01
This article presents a selective overview of studies that have investigated auditory language processing in children and late second-language (L2) learners using online methods such as event-related potentials (ERPs), eye-movement monitoring, or the cross-modal priming paradigm. Two grammatical phenomena are examined in detail, children's and…
ERIC Educational Resources Information Center
Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.
2017-01-01
Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…
The relation between working memory and language comprehension in signers and speakers.
Emmorey, Karen; Giezen, Marcel R; Petrich, Jennifer A F; Spurgeon, Erin; O'Grady Farnady, Lucinda
2017-06-01
This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL-English bilinguals completed several spatial and linguistic serial recall tasks. Additionally, their comprehension of spatial and non-spatial information in ASL and spoken English narratives was assessed. Results from the linguistic serial recall tasks revealed that the often reported advantage for speakers on linguistic short-term memory tasks does not extend to complex WM tasks with a serial recall component. For English, linguistic WM predicted retention of non-spatial information, and both linguistic and spatial WM predicted retention of spatial information. For ASL, spatial WM predicted retention of spatial (but not non-spatial) information, and linguistic WM did not predict retention of either spatial or non-spatial information. Overall, our findings argue against strong assumptions of independent domain-specific subsystems for the storage and processing of linguistic and spatial information and furthermore suggest a less important role for serial encoding in signed than spoken language comprehension. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Hirschfeld, Gerrit; Zwitserlood, Pienie; Dobel, Christian
2011-01-01
We investigated whether and when information conveyed by spoken language impacts on the processing of visually presented objects. In contrast to traditional views, grounded-cognition posits direct links between language comprehension and perceptual processing. We used a magnetoencephalographic cross-modal priming paradigm to disentangle these…
Scaling laws and model of words organization in spoken and written language
NASA Astrophysics Data System (ADS)
Bian, Chunhua; Lin, Ruokuang; Zhang, Xiaoyu; Ma, Qianli D. Y.; Ivanov, Plamen Ch.
2016-01-01
A broad range of complex physical and biological systems exhibits scaling laws. Human language is a complex system of word organization. Studies of written texts have revealed intriguing scaling laws that characterize the frequency of word occurrence, the rank of words, and the growth in the number of distinct words with text length. While studies have predominantly focused on the language system in its written form, such as books, little attention has been given to the structure of spoken language. Here we investigate a database of spoken language transcripts and written texts, and we find that word organization in both spoken language and written texts exhibits scaling laws, although with different crossover regimes and scaling exponents. We propose a model that provides insight into word organization in spoken language and written texts and successfully accounts for all scaling laws empirically observed in both language forms.
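To make the two scaling analyses concrete, the sketch below estimates a Zipf rank-frequency exponent and traces Heaps-style vocabulary growth from a tokenized text. It is a minimal illustration, not the authors' code; the toy token list stands in for any transcript or book corpus.

```python
# Minimal sketch of the two scaling analyses the abstract describes:
# the rank-frequency (Zipf) law and the vocabulary-growth (Heaps) law.
from collections import Counter
import math

def zipf_exponent(tokens):
    """Estimate the Zipf exponent by least-squares fit of
    log(frequency) against log(rank)."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope  # Zipf's law predicts a value near 1 for written text

def heaps_curve(tokens):
    """Number of distinct word types seen after each token;
    Heaps' law predicts growth ~ N**beta with beta < 1."""
    seen, curve = set(), []
    for tok in tokens:
        seen.add(tok)
        curve.append(len(seen))
    return curve

tokens = "the cat sat on the mat and the dog sat on the log".split()
print(zipf_exponent(tokens))
print(heaps_curve(tokens))
```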
ERIC Educational Resources Information Center
Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.
2015-01-01
Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of…
Sources of Difficulty in the Processing of Written Language. Report Series 4.3.
ERIC Educational Resources Information Center
Chafe, Wallace
Ease of language processing varies with the nature of the language involved. Ordinary spoken language is the easiest kind to produce and understand, while writing is a relatively new development. On thoughtful inspection, the readability of writing has shown itself to be a complex topic requiring insights from many academic disciplines and…
Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda
2010-01-01
Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n=26; 2;6 yrs). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children’s facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children’s ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language. PMID:19726000
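The control analysis described above amounts to a partial correlation. The sketch below, on illustrative simulated data rather than the study's, residualizes Spanish processing efficiency and Spanish vocabulary on English vocabulary before correlating them.

```python
# A hedged sketch of a partial correlation: relate Spanish processing
# efficiency to Spanish vocabulary while controlling for English
# vocabulary, by residualizing both variables on the covariate.
import numpy as np

def residualize(y, covariate):
    """Return y minus its least-squares prediction from the covariate."""
    X = np.column_stack([np.ones(len(y)), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(1)
n = 26                                    # matches the reported sample size
eng_vocab = rng.normal(size=n)
spa_vocab = 0.1 * eng_vocab + rng.normal(size=n)
spa_efficiency = 0.6 * spa_vocab + rng.normal(size=n)

r_partial = np.corrcoef(
    residualize(spa_efficiency, eng_vocab),
    residualize(spa_vocab, eng_vocab),
)[0, 1]
print(round(r_partial, 2))
```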
Active Learning for Automatic Audio Processing of Unwritten Languages (ALAPUL)
2016-07-01
AFRL-RH-WP-TR-2016-0074. Vergyri, Dimitra; Kathol, Andreas; Wang, Wen. Reporting period: June 2015-July 2016. Summary: The goal of the project was to investigate development of an automatic spoken language processing (ASLP) system…
Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts
ERIC Educational Resources Information Center
Office of English Language Acquisition, US Department of Education, 2015
2015-01-01
The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…
ERIC Educational Resources Information Center
van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
2011-01-01
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of the Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives…
The bridge of iconicity: from a world of experience to the experience of language.
Perniss, Pamela; Vigliocco, Gabriella
2014-09-19
Iconicity, a resemblance between properties of linguistic form (both in spoken and signed languages) and meaning, has traditionally been considered to be a marginal, irrelevant phenomenon for our understanding of language processing, development and evolution. Rather, the arbitrary and symbolic nature of language has long been taken as a design feature of the human linguistic system. In this paper, we propose an alternative framework in which iconicity in face-to-face communication (spoken and signed) is a powerful vehicle for bridging between language and human sensori-motor experience, and, as such, iconicity provides a key to understanding language evolution, development and processing. In language evolution, iconicity might have played a key role in establishing displacement (the ability of language to refer beyond what is immediately present), which is core to what language does; in ontogenesis, iconicity might play a critical role in supporting referentiality (learning to map linguistic labels to objects, events, etc., in the world), which is core to vocabulary development. Finally, in language processing, iconicity could provide a mechanism to account for how language comes to be embodied (grounded in our sensory and motor systems), which is core to meaningful communication.
Language and literacy development of deaf and hard-of-hearing children: successes and challenges.
Lederberg, Amy R; Schick, Brenda; Spencer, Patricia E
2013-01-01
Childhood hearing loss presents challenges to language development, especially spoken language. In this article, we review existing literature on deaf and hard-of-hearing (DHH) children's patterns and trajectories of language as well as development of theory of mind and literacy. Individual trajectories vary significantly, reflecting access to early identification/intervention, advanced technologies (e.g., cochlear implants), and perceptually accessible language models. DHH children develop sign language in a similar manner as hearing children develop spoken language, provided they are in a language-rich environment. This occurs naturally for DHH children of deaf parents, who constitute 5% of the deaf population. For DHH children of hearing parents, sign language development depends on the age that they are exposed to a perceptually accessible 1st language as well as the richness of input. Most DHH children are born to hearing families who have spoken language as a goal, and such development is now feasible for many children. Some DHH children develop spoken language in bilingual (sign-spoken language) contexts. For the majority of DHH children, spoken language development occurs in either auditory-only contexts or with sign supports. Although developmental trajectories of DHH children with hearing parents have improved with early identification and appropriate interventions, the majority of children are still delayed compared with hearing children. These DHH children show particular weaknesses in the development of grammar. Language deficits and differences have cascading effects in language-related areas of development, such as theory of mind and literacy development.
The road to language learning is iconic: evidence from British Sign Language.
Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella
2012-12-01
An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.
Huysmans, Elke; Bolk, Elske; Zekveld, Adriana A; Festen, Joost M; de Groot, Annette M B; Goverts, S Theo
2016-01-01
The authors first examined the influence of moderate to severe congenital hearing impairment (CHI) on the correctness of samples of elicited spoken language. Then, the authors used this measure as an indicator of linguistic proficiency and examined its effect on performance in language reception, independent of bottom-up auditory processing. In groups of adults with normal hearing (NH, n = 22), acquired hearing impairment (AHI, n = 22), and moderate to severe CHI (n = 21), the authors assessed linguistic proficiency by analyzing the morphosyntactic correctness of their spoken language production. Language reception skills were examined with a task for masked sentence recognition in the visual domain (text), at a readability level of 50%, using grammatically correct sentences and sentences with distorted morphosyntactic cues. The actual performance on the tasks was compared between groups. Adults with CHI made more morphosyntactic errors in spoken language production than adults with NH, while no differences were observed between the AHI and NH group. This outcome pattern sustained when comparisons were restricted to subgroups of AHI and CHI adults, matched for current auditory speech reception abilities. The data yielded no differences between groups in performance in masked text recognition of grammatically correct sentences in a test condition in which subjects could fully take advantage of their linguistic knowledge. Also, no difference between groups was found in the sensitivity to morphosyntactic distortions when processing short masked sentences, presented visually. These data showed that problems with the correct use of specific morphosyntactic knowledge in spoken language production are a long-term effect of moderate to severe CHI, independent of current auditory processing abilities. However, moderate to severe CHI generally does not impede performance in masked language reception in the visual modality, as measured in this study with short, degraded sentences. Aspects of linguistic proficiency that are affected by CHI thus do not seem to play a role in masked sentence recognition in the visual modality.
Iconic Factors and Language Word Order
ERIC Educational Resources Information Center
Moeser, Shannon Dawn
1975-01-01
College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)
Individual Differences in Inhibitory Control Relate to Bilingual Spoken Word Processing
ERIC Educational Resources Information Center
Mercier, Julie; Pivneva, Irina; Titone, Debra
2014-01-01
We investigated whether individual differences in inhibitory control relate to bilingual spoken word recognition. While their eye movements were monitored, native English and native French English-French bilinguals listened to English words (e.g., "field") and looked at pictures corresponding to the target, a within-language competitor…
Multiple Languages and the School Curriculum: Experiences from Tanzania
ERIC Educational Resources Information Center
Mushi, Selina Lesiaki Prosper
2012-01-01
This is a research report on children's use of multiple languages and the school curriculum. The study explored factors that trigger use of, and fluency in, multiple languages; and how fluency in multiple languages relates to thought processes and school performance. Advantages and disadvantages of using only one of the languages spoken were…
Teaching Reading through Language. TECHNIQUES.
ERIC Educational Resources Information Center
Jones, Edward V.
1986-01-01
Because reading is first and foremost a language comprehension process focusing on the visual form of spoken language, such teaching strategies as language experience and assisted reading have much to offer beginning readers. These techniques have been slow to become accepted by many adult literacy instructors; however, the two strategies,…
Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension
Sekerina, Irina A.; Campanelli, Luca; Van Dyke, Julie A.
2016-01-01
The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility for studying retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors, which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects, when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals. PMID:27378974
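For readers unfamiliar with the Visual World Paradigm's dependent measure, the sketch below computes fixation proportions per interest area in successive time bins; the field names, toy data, and 50 ms bin size are assumptions, not the authors' pipeline.

```python
# Minimal sketch of the standard VWP measure: the proportion of samples
# fixating each interest area in time bins relative to a spoken-word onset.
import pandas as pd

# Hypothetical fixation samples: one row per trial x time sample,
# with the interest area ("target", "distractor", "other") being fixated.
samples = pd.DataFrame({
    "trial":   [1, 1, 1, 2, 2, 2],
    "time_ms": [0, 50, 100, 0, 50, 100],   # relative to verb onset
    "region":  ["other", "target", "target",
                "distractor", "distractor", "target"],
})

samples["bin"] = samples["time_ms"] // 50          # 50 ms bins
props = (samples.groupby("bin")["region"]
                .value_counts(normalize=True)      # proportion per bin
                .rename("prop")
                .reset_index())
print(props)  # fixation-proportion curves for each interest area
```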
Eiesland, Eli Anne; Lind, Marianne
2012-03-01
Compounds are words that are made up of at least two other words (lexemes); they exhibit both lexical and syntactic characteristics and are thus particularly interesting for the study of language processing. Most studies of compounds and language processing have been based on data from experimental single-word production and comprehension tasks. To enhance the ecological validity of morphological processing research, data from other contexts, such as discourse production, need to be considered. This study investigates the production of nominal compounds in semi-spontaneous spoken texts by a group of speakers with fluent types of aphasia compared to a group of neurologically healthy speakers. The speakers with aphasia produce significantly fewer nominal compound types in their texts than the non-aphasic speakers, and the compounds they produce exhibit fewer different types of semantic relations than the compounds produced by the non-aphasic speakers. The results are discussed in relation to theories of language processing.
ERIC Educational Resources Information Center
McArthur, Genevieve; Castles, Anne
2013-01-01
The aim of this study was to determine if phonological processing deficits in specific reading disability (SRD) and specific language impairment (SLI) are the same or different. In four separate analyses, a different combination of reading and spoken language measures was used to divide 73 children into three subgroups: poor readers with average…
Bimodal Bilinguals Co-activate Both Languages during Spoken Comprehension
Shook, Anthony; Marian, Viorica
2012-01-01
Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are also activated in parallel. Hearing ASL-English bimodal bilinguals’ and English monolinguals’ eye-movements were recorded during a visual world paradigm, in which participants were instructed, in English, to select objects from a display. In critical trials, the target item appeared with a competing item that overlapped with the target in ASL phonology. Bimodal bilinguals looked more at competing items than at phonologically unrelated items, and looked more at competing items relative to monolinguals, indicating activation of the sign-language during spoken English comprehension. The findings suggest that language co-activation is not modality specific, and provide insight into the mechanisms that may underlie cross-modal language co-activation in bimodal bilinguals, including the role that top-down and lateral connections between levels of processing may play in language comprehension. PMID:22770677
Iconicity as a General Property of Language: Evidence from Spoken and Signed Languages
Perniss, Pamela; Thompson, Robin L.; Vigliocco, Gabriella
2010-01-01
Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings found in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to “hook up” to motor, perceptual, and affective experience. PMID:21833282
Research on Spoken Dialogue Systems
NASA Technical Reports Server (NTRS)
Aist, Gregory; Hieronymus, James; Dowding, John; Hockey, Beth Ann; Rayner, Manny; Chatzichrisafis, Nikos; Farrell, Kim; Renders, Jean-Michel
2010-01-01
Research in the field of spoken dialogue systems has been performed with the goal of making such systems more robust and easier to use in demanding situations. The term "spoken dialogue systems" signifies unified software systems containing speech-recognition, speech-synthesis, dialogue management, and ancillary components that enable human users to communicate, using natural spoken language or nearly natural prescribed spoken language, with other software systems that provide information and/or services.
Bimodal Bilinguals Co-Activate Both Languages during Spoken Comprehension
ERIC Educational Resources Information Center
Shook, Anthony; Marian, Viorica
2012-01-01
Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are…
Zachau, Swantje; Korpilahti, Pirjo; Hämäläinen, Jarmo A; Ervast, Leena; Heinänen, Kaisu; Suominen, Kalervo; Lehtihalmes, Matti; Leppänen, Paavo H T
2014-07-01
We explored semantic integration mechanisms in native and non-native hearing users of sign language and non-signing controls. Event-related brain potentials (ERPs) were recorded while participants performed a semantic decision task for priming lexeme pairs. Pairs were presented either within speech or across speech and sign language. Target-related ERP responses were subjected to principal component analyses (PCA), and the neurocognitive basis of semantic integration processes was assessed by analyzing the N400 and the late positive complex (LPC) components in response to spoken (auditory) and signed (visual) antonymic and unrelated targets. Semantically related effects triggered across modalities would indicate a tight interconnection between the signers' two languages similar to that described for spoken-language bilinguals. Remarkable structural similarity of the N400 and LPC components, with varying group differences between the spoken and signed targets, was found. The LPC was the dominant response. The controls' LPC differed from the LPC of the two signing groups. It was reduced for the auditory unrelated targets and was less frontal for all the visual targets. The visual LPC was more broadly distributed in native than non-native signers and was left-lateralized for the unrelated targets in the native hearing signers only. Semantic priming effects were found for the auditory N400 in all groups, but only native hearing signers revealed a clear N400 effect to the visual targets. Surprisingly, the non-native signers revealed no semantically related processing effect to the visual targets reflected in the N400 or the LPC; instead they appeared to rely more on visual post-lexical analysis stages than native signers. We conclude that native and non-native signers employed different processing strategies to integrate signed and spoken semantic content. It appeared that the signers' semantic processing system was affected by group-specific factors like language background and/or usage. Copyright © 2014 Elsevier Ltd. All rights reserved.
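As a minimal illustration of the PCA step the abstract mentions, the sketch below decomposes a matrix of (synthetic) averaged ERP waveforms into a few temporal components; it is not the authors' pipeline, and the dimensions are assumptions.

```python
# Sketch of PCA applied to ERP waveforms: rows are condition averages
# (or electrodes), columns are time points; components capture shared
# temporal patterns such as the N400 or the late positive complex.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
n_waveforms, n_times = 20, 300            # e.g. 1.2 s epochs at 250 Hz
erps = rng.normal(size=(n_waveforms, n_times))   # synthetic stand-in data

pca = PCA(n_components=3)
scores = pca.fit_transform(erps)          # component scores per waveform
components = pca.components_              # temporal loadings per component
print(pca.explained_variance_ratio_)      # variance captured by each
```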
L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.
Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour
2016-10-01
The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. To this end, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words under full attention. Attention manipulation reduced priming magnitude in both experiments in L2. Moreover, L2 word retrieval increased reaction times and reduced accuracy on the simultaneous secondary task to protect its own accuracy and speed.
ERIC Educational Resources Information Center
Hampton, L. H.; Kaiser, A. P.
2016-01-01
Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…
The Listening and Spoken Language Data Repository: Design and Project Overview
ERIC Educational Resources Information Center
Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.
2018-01-01
Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…
ERIC Educational Resources Information Center
Weber-Fox, Christine; Leonard, Laurence B.; Wray, Amanda Hampton; Tomblin, J. Bruce
2010-01-01
Brief tonal stimuli and spoken sentences were utilized to examine whether adolescents (aged 14;3-18;1) with specific language impairments (SLI) exhibit atypical neural activity for rapid auditory processing of non-linguistic stimuli and linguistic processing of verb-agreement and semantic constraints. Further, we examined whether the behavioral…
Cognitive Coordination on the Network Centric Battlefield
2009-03-06
…access in spoken language comprehension: Evaluating a linking hypothesis between fixations and linguistic processing. Journal of Psycholinguistic Research, 29, 557-580. Trueswell, J., & Tanenhaus, M. (Eds.) (2004). World-situated language use: Psycholinguistic, linguistic, and computational…
ERIC Educational Resources Information Center
Mishra, Ramesh Kumar; Singh, Niharika
2014-01-01
Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…
The Priority of Listening Comprehension over Speaking in the Language Acquisition Process
ERIC Educational Resources Information Center
Xu, Fang
2011-01-01
By elaborating on the definition of listening comprehension, the characteristics of spoken discourse, the relationship between STM and LTM, and Krashen's notion of comprehensible input, the paper argues that giving priority to listening comprehension over speaking in the language acquisition process is necessary.
ERIC Educational Resources Information Center
Goldberg, Donald M.; Dickson, Cheryl L.; Flexer, Carol
2010-01-01
This article discusses the AG Bell Academy for Listening and Spoken Language--an organization designed to build capacity of certified Listening and Spoken Language Specialists (LSLS) by defining and maintaining a set of professional standards for LSLS professionals and thereby addressing the global deficit of qualified LSLS. Definitions and…
Method for automatic measurement of second language speaking proficiency
NASA Astrophysics Data System (ADS)
Bernstein, Jared; Balogh, Jennifer
2005-04-01
Spoken language proficiency is intuitively related to effective and efficient communication in spoken interactions. However, it is difficult to derive a reliable estimate of spoken language proficiency by situated elicitation and evaluation of a person's communicative behavior. This paper describes the task structure and scoring logic of a group of fully automatic spoken language proficiency tests (for English, Spanish, and Dutch) that are delivered via telephone or Internet. Test items are presented in spoken form and require a spoken response. Each test is automatically scored and is primarily based on short, decontextualized tasks that elicit integrated listening and speaking performances. The tests present several types of tasks to candidates, including sentence repetition, question answering, sentence construction, and story retelling. The spoken responses are scored according to the lexical content of the response and a set of acoustic base measures on segments, words, and phrases, which are scaled with IRT methods or parametrically combined to optimize fit to human listener judgments. Most responses are isolated spoken phrases and sentences that are scored according to their linguistic content, their latency, and their fluency and pronunciation. The item development procedures and item norming are described.
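The "parametrically combined" scoring idea can be sketched as follows: standardize several automatic base measures and fit weights by least squares against human listener judgments. The measure names and data here are illustrative assumptions, not the actual test's internals.

```python
# Hedged sketch: combine automatic base measures into a machine score
# whose weights are fit to match human proficiency ratings.
import numpy as np

# rows = responses; columns = [word accuracy, latency (s), speech rate]
base = np.array([[0.9, 1.2, 3.1],
                 [0.6, 2.0, 2.2],
                 [0.8, 1.5, 2.8],
                 [0.4, 2.6, 1.9]])
human = np.array([78.0, 55.0, 70.0, 42.0])   # human listener judgments

z = (base - base.mean(axis=0)) / base.std(axis=0)    # standardize measures
X = np.column_stack([np.ones(len(z)), z])            # add intercept
weights, *_ = np.linalg.lstsq(X, human, rcond=None)  # least-squares fit
machine_score = X @ weights                          # combined score
print(weights, machine_score)
```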
NASA Astrophysics Data System (ADS)
Maarif, H. A.; Akmeliawati, R.; Gunawan, T. S.; Shafie, A. A.
2013-12-01
Sign language synthesis is a method for visualizing sign language movements from spoken language. Sign language (SL) is one of the means used by HSI people to communicate with hearing people, but unfortunately the number of people, including HSI people, who are familiar with sign language is very limited. This causes difficulties in communication between hearing people and HSI people. Sign language involves not only hand movement but also facial expression; these two elements complement each other. The hand movement conveys the meaning of each sign, and the facial expression conveys the signer's emotion. Generally, a sign language synthesizer recognizes the spoken language using speech recognition, handles the grammar using a context-free grammar, and renders the result with a 3D synthesizer using a recorded avatar. This paper analyzes and compares existing techniques for developing a sign language synthesizer, leading to the IIUM Sign Language Synthesizer.
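The three-stage architecture described above (speech recognition, grammar handling, avatar rendering) can be sketched as a pipeline of stubs. Everything below is a placeholder assumption; no particular ASR engine, grammar formalism, or avatar toolkit is implied.

```python
# Minimal pipeline sketch of a sign language synthesizer:
# ASR -> grammar-based reordering -> avatar playback.
def recognize_speech(audio):
    """Stand-in for an ASR engine returning a word sequence."""
    return ["where", "is", "the", "library"]

def reorder_for_sign_grammar(words):
    """Toy CFG-style rule: many sign languages place the WH-word last,
    e.g. 'where is the library' -> 'LIBRARY WHERE'."""
    if words and words[0] == "where":
        return [w for w in words[2:] if w != "the"] + ["where"]
    return words

def play_avatar(glosses):
    """Stand-in for rendering recorded avatar clips per gloss."""
    for gloss in glosses:
        print(f"[avatar plays sign: {gloss.upper()}]")

play_avatar(reorder_for_sign_grammar(recognize_speech(audio=None)))
```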
7 CFR 253.5 - State agency requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... households which speak the same non-English language and which do not contain adult(s) fluent in English as a second language. If the non-English language is spoken but not written, the State agency shall... sufficient bilingual staff for the timely processing of non-English speaking applicants. (3) The State agency...
"Visual" Cortex Responds to Spoken Language in Blind Children.
Bedny, Marina; Richardson, Hilary; Saxe, Rebecca
2015-08-19
Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. Copyright © 2015 the authors.
Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.
Douglas, Michael
2016-02-01
To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved significantly higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation of whether DDI can consistently and significantly improve the spoken language achievement of children with hearing loss.
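A matched-pairs design of this kind is typically analyzed with a paired test. The sketch below applies scipy's paired t-test to hypothetical raw scores for the eleven pairs; the numbers are invented for illustration, not the study's data.

```python
# Hedged sketch of a matched-pairs comparison on raw scores.
from scipy.stats import ttest_rel

ddi_scores     = [42, 38, 45, 40, 36, 44, 41, 39, 43, 37, 46]  # hypothetical
control_scores = [35, 36, 40, 33, 34, 38, 36, 35, 39, 32, 41]  # hypothetical

t_stat, p_value = ttest_rel(ddi_scores, control_scores)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```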
The Arizona Home Language Survey: The Identification of Students for ELL Services
ERIC Educational Resources Information Center
Goldenberg, Claude; Rutherford-Quach, Sara
2010-01-01
Assuring that English language learners (ELLs) receive the services to which they have a right requires accurately identifying those students. Virtually all states identify ELLs in a two-step process. First, parents fill out a home language survey. Second, students in whose homes a language other than English is spoken and who therefore might…
Finding Relevant Data in a Sea of Languages
2016-04-26
…full machine-translated text, unbiased word clouds, query-biased word clouds, and query-biased sentence… and information retrieval to automate language processing tasks so that the limited number of linguists available for analyzing text and spoken… the crime (stock market). The Cross-LAnguage Search Engine (CLASE) has already preprocessed the documents, extracting text to identify the language…
Modality and morphology: what we write may not be what we say.
Rapp, Brenda; Fischer-Baum, Simon; Miozzo, Michele
2015-06-01
Written language is an evolutionarily recent human invention; consequently, its neural substrates cannot be determined by the genetic code. How, then, does the brain incorporate skills of this type? One possibility is that written language is dependent on evolutionarily older skills, such as spoken language; another is that dedicated substrates develop with expertise. If written language does depend on spoken language, then acquired deficits of spoken and written language should necessarily co-occur. Alternatively, if at least some substrates are dedicated to written language, such deficits may doubly dissociate. We report on 5 individuals with aphasia, documenting a double dissociation in which the production of affixes (e.g., the -ing in jumping) is disrupted in writing but not speaking or vice versa. The findings reveal that written- and spoken-language systems are considerably independent from the standpoint of morpho-orthographic operations. Understanding this independence of the orthographic system in adults has implications for the education and rehabilitation of people with written-language deficits. © The Author(s) 2015.
Horn, David L; Pisoni, David B; Miyamoto, Richard T
2006-08-01
The objective of this study was to assess relations between fine and gross motor development and spoken language processing skills in pediatric cochlear implant users. The authors conducted a retrospective analysis of longitudinal data. Prelingually deaf children who received a cochlear implant before age 5 and had no known developmental delay or cognitive impairment were included in the study. Fine and gross motor development were assessed before implantation using the Vineland Adaptive Behavioral Scales, a standardized parental report of adaptive behavior. Fine and gross motor scores reflected a given child's motor functioning with respect to a normative sample of typically developing, normal-hearing children. Relations between these preimplant scores and postimplant spoken language outcomes were assessed. In general, gross motor scores were found to be positively related to chronologic age, whereas the opposite trend was observed for fine motor scores. Fine motor scores were more strongly correlated with postimplant expressive and receptive language scores than gross motor scores. Our findings suggest a disassociation between fine and gross motor development in prelingually deaf children: fine motor skills, in contrast to gross motor skills, tend to be delayed as the prelingually deaf children get older. These findings provide new knowledge about the links between motor and spoken language development and suggest that auditory deprivation may lead to atypical development of certain motor and language skills that share common cortical processing resources.
ERIC Educational Resources Information Center
Rama, Pia; Sirri, Louah; Serres, Josette
2013-01-01
Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related…
A Platform for Multilingual Research in Spoken Dialogue Systems
2000-08-01
Defense Technical Information Center Compilation Part Notice ADP010384. TITLE: A Platform for Multilingual Research in Spoken Dialogue Systems. Cole, Ronald A.; Serridge, Ben; Hosom, John-Paul; Cronk, Andrew; Kaiser, Ed. Center for Spoken Language Understanding, University of Colorado, Boulder, CO 80309, USA; Universidad de las Americas, 72820 Santa Catarina Martir, Puebla, Mexico.
Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.
2014-01-01
Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, for deaf individuals the age of onset of first language acquisition and the quality and quantity of linguistic input are highly heterogeneous, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1), and in deaf individuals who were late learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native signers demonstrated early and robust activation of sub-lexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not the sensory and motor modality through which the linguistic signal is sent and received. PMID:25528091
One grammar or two? Sign Languages and the Nature of Human Language
Lillo-Martin, Diane C; Gajewski, Jon
2014-01-01
Linguistic research has identified abstract properties that seem to be shared by all languages—such properties may be considered defining characteristics. In recent decades, the recognition that human language is found not only in the spoken modality but also in the form of sign languages has led to a reconsideration of some of these potential linguistic universals. In large part, the linguistic analysis of sign languages has led to the conclusion that universal characteristics of language can be stated at an abstract enough level to include languages in both spoken and signed modalities. For example, languages in both modalities display hierarchical structure at sub-lexical and phrasal level, and recursive rule application. However, this does not mean that modality-based differences between signed and spoken languages are trivial. In this article, we consider several candidate domains for modality effects, in light of the overarching question: are signed and spoken languages subject to the same abstract grammatical constraints, or is a substantially different conception of grammar needed for the sign language case? We look at differences between language types based on the use of space, iconicity, and the possibility for simultaneity in linguistic expression. The inclusion of sign languages does support some broadening of the conception of human language—in ways that are applicable for spoken languages as well. Still, the overall conclusion is that one grammar applies for human language, no matter the modality of expression. PMID:25013534
Early Sign Language Exposure and Cochlear Implantation Benefits.
Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S
2017-07-01
Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.
Subarashii: Encounters in Japanese Spoken Language Education.
ERIC Educational Resources Information Center
Bernstein, Jared; Najmi, Amir; Ehsani, Farzad
1999-01-01
Describes Subarashii, an experimental computer-based interactive spoken-language education system designed to understand what a student is saying in Japanese and respond in a meaningful way in spoken Japanese. Implementation of a preprototype version of the Subarashii system identified strengths and limitations of continuous speech recognition…
Building Spoken Language in the First Plane
ERIC Educational Resources Information Center
Bettmann, Joen
2016-01-01
Through a strong Montessori orientation to the parameters of spoken language, Joen Bettmann makes the case for "materializing" spoken knowledge using the stimulation of real objects and real situations that promote mature discussion around the sensorial aspect of the prepared environment. She lists specific materials in the classroom…
Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane
2012-11-01
There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.
ERIC Educational Resources Information Center
Woll, Bencie; Morgan, Gary
2012-01-01
Various theories of developmental language impairments have sought to explain these impairments in modality-specific ways--for example, that the language deficits in SLI or Down syndrome arise from impairments in auditory processing. Studies of signers with language impairments, especially those who are bilingual in a spoken language as well as a…
ERIC Educational Resources Information Center
Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.
2012-01-01
The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an event-related potential (ERP) norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a…
Evaluating Corpus Literacy Training for Pre-Service Language Teachers: Six Case Studies
ERIC Educational Resources Information Center
Heather, Julian; Helt, Marie
2012-01-01
Corpus literacy is the ability to use corpora--large, principled databases of spoken and written language--for language analysis and instruction. While linguists have emphasized the importance of corpus training in teacher preparation programs, few studies have investigated the process of initiating teachers into corpus literacy with the result…
A Shared Platform for Studying Second Language Acquisition
ERIC Educational Resources Information Center
MacWhinney, Brian
2017-01-01
The study of second language acquisition (SLA) can benefit from the same process of data sharing that has proven effective in areas such as first language acquisition and aphasiology. Researchers can work together to construct a shared platform that combines data from spoken and written corpora, online tutors, and Web-based experimentation. Many of…
Language Maintenance and the Deaf Child
ERIC Educational Resources Information Center
Willoughby, Louisa
2012-01-01
For all families with deaf children, choosing communication methods is a complex and evolving business. This process is particularly complex for migrant background families, who must not only negotiate the role that speaking or signing will play in their communication practices, but also which spoken language(s) will be used--that of the host…
Executive Functioning and Speech-Language Skills Following Long-Term Use of Cochlear Implants
ERIC Educational Resources Information Center
Kronenberger, William G.; Colson, Bethany G.; Henning, Shirley C.; Pisoni, David B.
2014-01-01
Neurocognitive processes such as executive functioning (EF) may influence the development of speech-language skills in deaf children after cochlear implantation in ways that differ from normal-hearing, typically developing children. Conversely, spoken language abilities and experiences may also exert reciprocal effects on the development of EF.…
Task-Oriented Spoken Dialog System for Second-Language Learning
ERIC Educational Resources Information Center
Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun
2016-01-01
This paper introduces a Dialog-Based Computer Assisted second-Language Learning (DB-CALL) system using task-oriented dialogue processing technology. The system promotes dialogue with a second-language learner for a specific task, such as purchasing tour tickets, ordering food, passing through immigration, etc. The dialog system plays a role of a…
How long-term memory and accentuation interact during spoken language comprehension.
Li, Xiaoqing; Yang, Yufang
2013-04-01
Spoken language comprehension requires immediate integration of different information types, such as semantics, syntax, and prosody. Meanwhile, both the information derived from speech signals and the information retrieved from long-term memory exert their influence on language comprehension immediately. Using EEG (electroencephalogram), the present study investigated how the information retrieved from long-term memory interacts with accentuation during spoken language comprehension. Mini Chinese discourses were used as stimuli, with an interrogative or assertive context sentence preceding the target sentence. The target sentence included one critical word conveying new information. The critical word was either highly expected or lowly expected given the information retrieved from long-term memory. Moreover, the critical word was either consistently accented or inconsistently de-accented. The results revealed that for lowly expected new information, inconsistently de-accented words elicited a larger N400 and larger theta power increases (4-6 Hz) than consistently accented words. In contrast, for the highly expected new information, consistently accented words elicited a larger N400 and larger alpha power decreases (8-14 Hz) than inconsistently de-accented words. The results suggest that, during spoken language comprehension, the effect of accentuation interacted with the information retrieved from long-term memory immediately. Moreover, our results also have important consequences for our understanding of the processing nature of the N400. The N400 amplitude is not only enhanced for incorrect information (new and de-accented words) but also enhanced for correct information (new and accented words). Copyright © 2013 Elsevier Ltd. All rights reserved.
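The reported theta increases and alpha decreases rest on band-power estimates. The sketch below computes a Welch power spectrum on a synthetic signal and averages it within the two bands; real use would operate on single-trial EEG epochs with baseline correction, and the sampling rate here is an assumption.

```python
# Hedged sketch of band-power estimation for the theta (4-6 Hz) and
# alpha (8-14 Hz) bands via Welch power spectra.
import numpy as np
from scipy.signal import welch

fs = 500                                    # sampling rate (Hz), assumed
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)  # synthetic

freqs, psd = welch(eeg, fs=fs, nperseg=fs)  # power spectral density

def band_power(freqs, psd, lo, hi):
    """Mean power within a frequency band."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

print("theta (4-6 Hz):", band_power(freqs, psd, 4, 6))
print("alpha (8-14 Hz):", band_power(freqs, psd, 8, 14))
```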
Unique Auditory Language-Learning Needs of Hearing-Impaired Children: Implications for Intervention.
ERIC Educational Resources Information Center
Johnson, Barbara Ann; Paterson, Marietta M.
Twenty-seven hearing-impaired young adults with hearing potentially usable for language comprehension and a history of speech-language therapy participated in this study of training in using residual hearing for the purpose of learning spoken language. Evaluation of their recalled therapy experiences indicated that listening to spoken language did…
Caselli, Naomi K; Pyers, Jennie E
2017-07-01
Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
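The regression logic can be sketched as follows. For brevity this fits a plain logistic regression on synthetic data; the authors used mixed-effects models with random effects (e.g., for child and sign), which would require a dedicated GLMM package, and all variable names and values below are illustrative.

```python
# Hedged sketch: predict whether a child produces a sign from its
# iconicity, neighborhood density, and lexical frequency.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
data = pd.DataFrame({
    "iconicity": rng.uniform(1, 7, n),    # rated iconicity, 1-7
    "density":   rng.poisson(3, n),       # phonological neighbors
    "frequency": rng.normal(2, 1, n),     # log lexical frequency
})
# Simulate production with all three factors facilitating acquisition.
logit_p = (-3 + 0.5 * data["iconicity"]
           + 0.3 * data["density"] + 0.4 * data["frequency"])
data["produced"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("produced ~ iconicity + density + frequency",
                data=data).fit(disp=False)
print(fit.params)   # positive coefficients = independent facilitation
```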
The Arizona Home Language Survey: The Under-Identification of Students for English Language Services
ERIC Educational Resources Information Center
Goldenberg, Claude; Rutherford-Quach, Sara
2012-01-01
Assuring that English learners (ELs) receive the support services to which they are entitled requires accurately identifying students who are limited in their English proficiency. As a first step in the identification process, students' parents fill out a home language survey. If the survey indicates a language other than English is spoken in the…
ERIC Educational Resources Information Center
Henry, Maya L.; Beeson, Pelagie M.; Alexander, Gene E.; Rapcsak, Steven Z.
2012-01-01
Connectionist theories of language propose that written language deficits arise as a result of damage to semantic and phonological systems that also support spoken language production and comprehension, a view referred to as the "primary systems" hypothesis. The objective of the current study was to evaluate the primary systems account in a mixed…
Fujiwara, Keizo; Naito, Yasushi; Senda, Michio; Mori, Toshiko; Manabe, Tomoko; Shinohara, Shogo; Kikuchi, Masahiro; Hori, Shin-Ya; Tona, Yosuke; Yamazaki, Hiroshi
2008-04-01
The use of fluorodeoxyglucose positron emission tomography (FDG-PET) with a visual language task provided objective information on the development and plasticity of cortical language networks. This approach could help individuals involved in the habilitation and education of prelingually deafened children to decide upon the appropriate mode of communication. To investigate the cortical processing of the visual component of language and the effect of deafness upon this activity. Six prelingually deafened children participated in this study. The subjects were numbered 1-6 in the order of their spoken communication skills. In the time period between an intravenous injection of 370 MBq 18F-FDG and PET scanning of the brain, each subject was instructed to watch a video of the face of a speaking person. The cortical radioactivity of each deaf child was compared with that of a group of normal-hearing adults using a t test in a basic SPM2 model. The widest bilaterally activated cortical area was detected in subject 1, who was the worst user of spoken language. By contrast, there was no significant difference between subject 6, who was the best user of spoken language with a hearing aid, and the normal-hearing group.
Inspector, Michael; Manor, David; Amir, Noam; Kushnir, Tamar; Karni, Avi
2013-01-01
Objectives: Intonation may serve as a cue for facilitated recognition and processing of spoken words, and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. Experimental design: Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented in a set flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant throughout the three repetitions; (iii) each word had a different arbitrary pitch contour in each of its repetitions. Principal findings: The repeated presentation of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21/22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. Conclusions: Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words. PMID:24391713
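As an illustration of the dependent measure, repetition suppression is often quantified as the drop in response from the first to later presentations of an item. A toy version of that computation, using simulated region-of-interest betas rather than the study's fMRI data, might look like this:

```python
# Illustrative sketch of a repetition suppression (RS) index
# from trial-wise ROI betas (all values simulated).
import numpy as np

rng = np.random.default_rng(2)
n_words, n_reps = 40, 3

def roi_betas(suppression):
    """Simulated trial betas: response shrinks by `suppression` per repetition."""
    base = rng.normal(1.0, 0.2, size=(n_words, 1))
    noise = rng.normal(0, 0.05, size=(n_words, n_reps))
    return base * (1 - suppression) ** np.arange(n_reps) + noise

fixed_pitch = roi_betas(suppression=0.25)   # set contour: RS expected
varied_pitch = roi_betas(suppression=0.0)   # new contour each time: RS abolished

for name, betas in [("fixed", fixed_pitch), ("varied", varied_pitch)]:
    rs = betas[:, 0].mean() - betas[:, 1:].mean()   # first minus repeated
    print(f"{name} contour: RS index = {rs:.3f}")
```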
Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.
de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo
2016-10-01
Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery of lexical, grammatical, auditory, and verbal memory measures. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
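The reported predictor analysis is a standard multiple linear regression. A minimal sketch with simulated data and assumed variable names (not the study's dataset) is:

```python
# Hedged sketch: multiple linear regression predicting lexical outcomes
# from age at testing, phoneme perception, and auditory word closure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 39
df = pd.DataFrame({
    "age": rng.uniform(4, 12, n),                 # age at testing (years)
    "phoneme_perception": rng.normal(0, 1, n),    # standardized score
    "word_closure": rng.normal(0, 1, n),          # standardized score
})
df["lexical_score"] = (5 * df.age + 8 * df.phoneme_perception
                       + 6 * df.word_closure + rng.normal(0, 5, n))

fit = smf.ols("lexical_score ~ age + phoneme_perception + word_closure", df).fit()
print(fit.summary())
```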
Massaro, Dominic W
2012-01-01
I review 2 seminal research reports published in this journal during its second decade, more than a century ago. Given psychology's subdisciplines, they would not normally be reviewed together, because one involves reading and the other speech perception. The small amount of interaction between these domains might have limited research and theoretical progress. In fact, the 2 early research reports revealed common processes involved in these 2 forms of language processing. Their illustration of the role of Wundt's apperceptive process in reading and speech perception anticipated descriptions of contemporary theories of pattern recognition, such as the fuzzy logical model of perception. Given the commonalities between reading and listening, one can question why they have been viewed so differently. Most researchers and educators believe that spoken language is acquired naturally from birth onward, and even prenatally, through interactions with people who talk, whereas learning to read requires formal instruction and schooling and is not possible until the child has acquired spoken language, reaches school age, and receives that instruction. If an appropriate form of written text is made available early in a child's life, however, the current hypothesis is that reading will also be learned inductively and emerge naturally, with no significant negative consequences. If this proposal is true, it should soon be possible to create an interactive system, Technology Assisted Reading Acquisition, to allow children to acquire literacy naturally.
An Investigation into the State of Status Planning of Tiv Language of Central Nigeria
ERIC Educational Resources Information Center
Terkimbi, Atonde
2016-01-01
The Tiv language is one of the major languages spoken in central Nigeria. The language is of the Benue-Congo subclass of the Bantu parent family. It has over four million speakers across five states of Nigeria. The language, like many other Nigerian languages, is in dire need of language planning efforts and strategies. Some previous efforts were…
Newman, Aaron J; Supalla, Ted; Fernandez, Nina; Newport, Elissa L; Bavelier, Daphne
2015-09-15
Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system-gesture-further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages-supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network-demonstrating an influence of experience on the perception of nonlinguistic stimuli.
ERIC Educational Resources Information Center
Rietz, Sandra A.
Children will meet one less obstacle to making the transition from spoken to written fluency in language if, during the transition period, they experience written language that corresponds structurally to their spoken language patterns. Familiar children's folksongs, because they contain some of the structure of children's oral language, provide…
Standardization of the Revised Token Test in Bangla
ERIC Educational Resources Information Center
Kumar, Suman; Kumar, Prashant; Kumari, Punam
2013-01-01
Bengali or Bangla is an Indo-Aryan language. It is the state language of West Bengal and Tripura and is also spoken in some parts of Assam. Bangla is the official language of Bangladesh. With nearly 230 million speakers (Wikipedia 2010), Bangla is one of the most widely spoken languages in the world. Bangla is the most commonly used language in West…
On the Conventionalization of Mouth Actions in Australian Sign Language.
Johnston, Trevor; van Roekel, Jane; Schembri, Adam
2016-03-01
This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages, because the hands produce the signs which, individually and in groups, are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and face, plays a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language, making comparisons with other signed languages where data are available, and of the form/meaning pairings that these mouth actions instantiate.
Honoring the Child with Dyslexia in a Montessori Classroom
ERIC Educational Resources Information Center
Skotheim, Meghan Kane
2009-01-01
Speaking, listening, reading, and writing are all language activities. The human capacity for speaking and listening has a biological foundation: wherever there are people, there is spoken language. Acquiring spoken language is an unconscious activity, and, barring any physical deformity or language learning disability, like severe autism, all…
Looking beyond Signed English to Describe the Language of Two Deaf Children.
ERIC Educational Resources Information Center
Suty, Karen A.; Friel-Patti, Sandy
1982-01-01
Examines the spontaneous language of deaf children without forcing the analysis to fit the features of a spoken language system. Suggests linguistic competence of deaf children is commensurate with their cognitive age and is not adequately described by the standard spoken English language tests. (EKN)
Spoken Sentence Production in College Students with Dyslexia: Working Memory and Vocabulary Effects
ERIC Educational Resources Information Center
Wiseheart, Rebecca; Altmann, Lori J. P.
2018-01-01
Background: Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. Aims: To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group…
The Development of Spoken Language in Deaf Children: Explaining the Unexplained Variance.
ERIC Educational Resources Information Center
Musselman, Carol; Kircaali-Iftar, Gonul
1996-01-01
This study compared 20 young deaf children with either exceptionally good or exceptionally poor spoken language for their hearing loss, age, and intelligence. Factors associated with high performance included earlier use of binaural ear-level aids, better educated mothers, auditory/verbal or auditory/oral instruction, reliance on spoken language…
Adaptation and Assessment of a Public Speaking Rating Scale
ERIC Educational Resources Information Center
Iberri-Shea, Gina
2017-01-01
Prominent spoken language assessments such as the Oral Proficiency Interview and the Test of Spoken English have been primarily concerned with speaking ability as it relates to conversation. This paper looks at an additional aspect of spoken language ability, namely public speaking. This study used an adapted form of a public speaking rating scale…
Drop Everything and Write (DEAW): An Innovative Program to Improve Literacy Skills
ERIC Educational Resources Information Center
Joshi, R. Malatesha; Aaron, P. G.; Hill, Nancy; Ocker Dean, Emily; Boulware-Gooden, Regina; Rupley, William H.
2008-01-01
It is believed that language is an innate ability and, therefore, spoken language is acquired naturally and informally. In contrast, written language is thought to be an invention and, therefore, has to be learned through formal instruction. An alternate view, however, is that spoken language and written language are two forms of manifestations of…
Alt, Mary; Gutmann, Michelle L
2009-01-01
This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and those with hDSWL plus attention deficit hyperactivity disorder (+ADHD). Sixty-eight adults were required to associate a novel object with a novel label and then recognize semantic features of the object and phonological features of the label. Participants were tested for overt ability (accuracy) and covert processing (reaction time). The +ADHD group was less accurate at mapping semantic features and slower to respond to lexical labels than both other groups. Different factors correlated with word learning performance for each group. Adults with language and attention deficits are more impaired at word learning than adults with language deficits only. Despite behavioral profiles similar to those of typical peers, adults with hDSWL may use different processing strategies than their peers. Readers will be able to: (1) recognize the influence of a dual disability (hDSWL and ADHD) on word learning outcomes; (2) identify factors that may contribute to word learning in adults in terms of (a) the nature of the words to be learned and (b) the language processing of the learner.
Proform-Antecedent Linking in Listeners with Language Impairments and Unimpaired Listeners
ERIC Educational Resources Information Center
Engel, Samantha Michelle
2016-01-01
This dissertation explores how listeners extract meaning from personal and reflexive pronouns in spoken language. To be understood, words like her and herself must be linked to a prior element in the speech stream (or antecedent). This process draws on syntactic knowledge and verbal working memory processes. I present two original research studies…
ERIC Educational Resources Information Center
Gow, David W., Jr.; Keller, Corey J.; Eskandar, Emad; Meng, Nate; Cash, Sydney S.
2009-01-01
In this work, we apply Granger causality analysis to high spatiotemporal resolution intracranial EEG (iEEG) data to examine how different components of the left perisylvian language network interact during spoken language perception. The specific focus is on the characterization of serial versus parallel processing dependencies in the dominant…
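For readers unfamiliar with the method, a pairwise Granger test asks whether the past of one signal improves prediction of another. The sketch below runs statsmodels on two simulated channels; the study itself applied Granger analysis to a full multivariate iEEG electrode set, so this is only a toy illustration.

```python
# Toy pairwise Granger causality test on simulated "iEEG" channels.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(4)
n = 2000
driver = rng.normal(size=n)
target = np.zeros(n)
for t in range(2, n):   # target depends on the driver's past (lag 2)
    target[t] = 0.5 * target[t - 1] + 0.4 * driver[t - 2] + rng.normal(scale=0.5)

# Column order matters: tests whether column 1 Granger-causes column 0.
data = np.column_stack([target, driver])
res = grangercausalitytests(data, maxlag=4)
f_stat, p_value, _, _ = res[2][0]["ssr_ftest"]   # F test at lag 2
print(f"lag-2 Granger F = {f_stat:.1f}, p = {p_value:.2g}")
```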
Building a Reference Resolution System Using Human Language Processing for Inspiration
ERIC Educational Resources Information Center
Watters, Shana Kay
2010-01-01
For over 30 years, reference resolution, the process of determining what a noun phrase including a pronoun refers to in written and spoken language, has been an important and on-going area of research. Most existing pronominal reference resolution algorithms and systems are designed to use syntactic information and surface features (e.g. number…
The employment of a spoken language computer applied to an air traffic control task.
NASA Technical Reports Server (NTRS)
Laveson, J. I.; Silver, C. A.
1972-01-01
Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter, with a traffic flow simulation ranging from single-engine to commercial jet aircraft, provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve controller performance.
ERIC Educational Resources Information Center
Bozorgian, Hossein; Pillay, Hitendra
2013-01-01
In language teaching, listening refers to the complex process that allows us to understand spoken language. The current study, conducted in Iran with an experimental design, investigated the effectiveness of teaching listening strategies delivered in L1 (Persian) and its effect on listening comprehension in L2. Five listening strategies:…
American or British? L2 Speakers' Recognition and Evaluations of Accent Features in English
ERIC Educational Resources Information Center
Carrie, Erin; McKenzie, Robert M.
2018-01-01
Recent language attitude research has attended to the processes involved in identifying and evaluating spoken language varieties. This article investigates the ability of second-language learners of English in Spain (N = 71) to identify Received Pronunciation (RP) and General American (GenAm) speech and their perceptions of linguistic variation…
ERIC Educational Resources Information Center
Hoog, Brigitte E.; Langereis, Margreet C.; Weerdenburg, Marjolijn; Knoors, Harry E. T.; Verhoeven, Ludo
2016-01-01
Background: The spoken language difficulties of children with moderate or severe to profound hearing loss are mainly related to limited auditory speech perception. However, degraded or filtered auditory input as evidenced in children with cochlear implants (CIs) may result in less efficient or slower language processing as well. To provide insight…
The EpiSLI Database: A Publicly Available Database on Speech and Language
ERIC Educational Resources Information Center
Tomblin, J. Bruce
2010-01-01
Purpose: This article describes a database that was created in the process of conducting a large-scale epidemiologic study of specific language impairment (SLI). As such, this database will be referred to as the EpiSLI database. Children with SLI have unexpected and unexplained difficulties learning and using spoken language. Although there is no…
Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David
2013-12-06
Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies, including cochlear implants, make spoken language an option for most children. However, there is no consensus on what constitutes optimal intervention for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups, as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity, and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity/type of hearing disorder, age of identification, and type of hearing technology. This review will provide evidence on the effectiveness of using sign language in combination with oral language therapies for developing spoken language in children with hearing loss who are identified at a young age. The information from this review can provide guidance to parents and intervention specialists, inform policy decisions, and provide directions for future research. Systematic review registration: CRD42013005426.
Spoken Language and Mathematics.
ERIC Educational Resources Information Center
Raiker, Andrea
2002-01-01
States teachers/learners use spoken language in a three-part mathematics lesson advocated by the British National Numeracy Strategy. Recognizes language's importance by emphasizing correct use of mathematical vocabulary in raising standards. Finds pupils and teachers appear to ascribe different meanings to scientific words because of their…
ERIC Educational Resources Information Center
Yau, Shu Hui; Brock, Jon; McArthur, Genevieve
2016-01-01
It has been proposed that language impairments in children with Autism Spectrum Disorders (ASD) stem from atypical neural processing of speech and/or nonspeech sounds. However, the strength of this proposal is compromised by the unreliable outcomes of previous studies of speech and nonspeech processing in ASD. The aim of this study was to…
Visual Sonority Modulates Infants' Attraction to Sign Language
ERIC Educational Resources Information Center
Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain
2018-01-01
The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…
Spoken Language Production in Young Adults: Examining Syntactic Complexity
ERIC Educational Resources Information Center
Nippold, Marilyn A.; Frantz-Kaspar, Megan W.; Vigeland, Laura M.
2017-01-01
Purpose: In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language…
The Primacy of Language Mixing: The Effects of a Matrix System.
ERIC Educational Resources Information Center
Field, Fredric
1999-01-01
Focuses on the differences between bilingual mixtures and creoles. In both types of language, elements and structures of two or more distinct languages are intermingled. By contrasting Nahuatl, spoken in Central Mexico, with Palenquero, a Spanish-based creole spoken near the Caribbean coast of Colombia, examines two components of language thought…
Weismer, Susan Ellis
2015-01-01
Purpose: Spoken language benchmarks proposed by Tager-Flusberg et al. (2009) were used to characterize communication profiles of toddlers with autism spectrum disorders and to investigate whether there were differences in variables hypothesized to influence language development at different benchmark levels. Method: The communication abilities of a large sample of toddlers with autism spectrum disorders (N = 105) were characterized in terms of spoken language benchmarks. The toddlers were grouped according to these benchmarks to investigate whether there were differences in selected variables across benchmark groups at a mean age of 2.5 years. Results: The majority of children in the sample presented with uneven communication profiles, with relative strengths in phonology and significant weaknesses in pragmatics. When children were grouped according to one expressive language domain, across-group differences were observed in response to joint attention and gestures but not in cognition or restricted and repetitive behaviors. Conclusion: The spoken language benchmarks are useful for characterizing early communication profiles and investigating features that influence expressive language growth. PMID:26254475
ERIC Educational Resources Information Center
Guo, Ling-Yu; McGregor, Karla K.; Spencer, Linda J.
2015-01-01
Purpose: The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation.…
ERIC Educational Resources Information Center
Casey, Laura Baylot; Bicard, David F.
2009-01-01
Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…
Evans, Julia L; Gillam, Ronald B; Montgomery, James W
2018-05-10
This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 [years;months]), 117 with DLD and 117 TD, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition at both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLD.
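Propensity matching of the kind described, pairing each DLD child with the TD child closest on a score estimated from the matching covariates, can be sketched as follows. The column names and the greedy 1:1 matching rule are illustrative assumptions, not the authors' procedure.

```python
# Hedged sketch of propensity-score matching on simulated covariates.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 400
df = pd.DataFrame({
    "dld": rng.binomial(1, 0.5, n),            # group indicator
    "age_months": rng.uniform(84, 143, n),
    "male": rng.binomial(1, 0.5, n),
    "ses": rng.normal(0, 1, n),
    "maternal_edu": rng.integers(10, 20, n),
})

# Propensity score: probability of DLD given the matching covariates.
X = df[["age_months", "male", "ses", "maternal_edu"]]
df["pscore"] = LogisticRegression(max_iter=1000).fit(X, df.dld).predict_proba(X)[:, 1]

cases = df[df.dld == 1]
pool = df[df.dld == 0].copy()
pairs = []
for idx, row in cases.iterrows():              # greedy 1:1 nearest-neighbor match
    j = (pool.pscore - row.pscore).abs().idxmin()
    pairs.append((idx, j))
    pool = pool.drop(j)                        # match without replacement

print(f"matched {len(pairs)} DLD-TD pairs")
```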
ERIC Educational Resources Information Center
Batalova, Jeanne; McHugh, Margie
2010-01-01
While English Language Learner (ELL) students in the United States speak more than 150 languages, Spanish is by far the most common home or first language, but is not the top language spoken by ELLs in every state. This fact sheet, based on analysis of the U.S. Census Bureau's 2009 American Community Survey, documents the top languages spoken…
Linking Language with Embodied and Teleological Representations of Action for Humanoid Cognition
Lallee, Stephane; Madden, Carol; Hoen, Michel; Dominey, Peter Ford
2010-01-01
The current research extends our framework for embodied language and action comprehension to include a teleological representation that allows goal-based reasoning for novel actions. The objective of this work is to implement and demonstrate the advantages of a hybrid, embodied-teleological approach to action–language interaction, both from a theoretical perspective, and via results from human–robot interaction experiments with the iCub robot. We first demonstrate how a framework for embodied language comprehension allows the system to develop a baseline set of representations for processing goal-directed actions such as “take,” “cover,” and “give.” Spoken language and visual perception are input modes for these representations, and the generation of spoken language is the output mode. Moving toward a teleological (goal-based reasoning) approach, a crucial component of the new system is the representation of the subcomponents of these actions, which includes relations between initial enabling states, and final resulting states for these actions. We demonstrate how grammatical categories including causal connectives (e.g., because, if–then) can allow spoken language to enrich the learned set of state-action-state (SAS) representations. We then examine how this enriched SAS inventory enhances the robot's ability to represent perceived actions in which the environment inhibits goal achievement. The paper addresses how language comes to reflect the structure of action, and how it can subsequently be used as an input and output vector for embodied and teleological aspects of action. PMID:20577629
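A minimal sketch of what a state-action-state (SAS) inventory might look like, and how a causal connective could add to it, is shown below. This illustrates the representational idea only; it is not the iCub system's actual code, and all names are hypothetical.

```python
# Illustrative state-action-state (SAS) inventory for goal-based reasoning.
from dataclasses import dataclass

@dataclass(frozen=True)
class SAS:
    initial: str   # enabling state
    action: str    # goal-directed action
    result: str    # resulting state

inventory = {
    SAS("object visible", "take", "object held"),
    SAS("object held", "give", "partner holds object"),
}

def learn_from_connective(cause: SAS, effect: SAS, inventory: set) -> None:
    """'X because Y' style input: chain Y's result into X's enabling state."""
    inventory.add(SAS(cause.result, effect.action, effect.result))

learn_from_connective(SAS("object visible", "take", "object held"),
                      SAS("object held", "cover", "object hidden"), inventory)
print(f"{len(inventory)} SAS entries")
```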
ERIC Educational Resources Information Center
Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.
2017-01-01
Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…
ERIC Educational Resources Information Center
Miller, Paul
2013-01-01
This study focuses on similarities and differences in the processing of written text by individuals with prelingual deafness from different reading levels that used Hebrew as their first spoken language and Israeli Sign Language as their primary manual communication mode. Data were gathered from three sources, including (a) a sentence…
ERIC Educational Resources Information Center
Koyalan, Aylin; Mumford, Simon
2011-01-01
The process of writing journal articles is increasingly being seen as a collaborative process, especially where the authors are English as an Additional Language (EAL) academics. This study examines the changes made in terms of register to EAL writers' journal articles by a native-speaker writing centre advisor at a private university in Turkey.…
Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying
ERIC Educational Resources Information Center
Barberà, Gemma; Zwets, Martine
2013-01-01
In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…
The Language Development of a Deaf Child with a Cochlear Implant
ERIC Educational Resources Information Center
Mouvet, Kimberley; Matthijs, Liesbeth; Loots, Gerrit; Taverniers, Miriam; Van Herreweghe, Mieke
2013-01-01
Hearing parents of deaf or partially deaf infants are confronted with the complex question of communication with their child. This question is complicated further by conflicting advice on how to address the child: in spoken language only, in spoken language supported by signs, or in signed language. This paper studies the linguistic environment…
Corpus-Based Authenticity Analysis of Language Teaching Course Books
ERIC Educational Resources Information Center
Peksoy, Emrah; Harmaoglu, Özhan
2017-01-01
In this study, the resemblance of the language learning course books used in Turkey to authentic language spoken by native speakers is explored by using a corpus-based approach. For this, the 10-million-word spoken part of the British National Corpus was selected as reference corpus. After that, all language learning course books used in high…
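Corpus comparisons of this kind often score words by Dunning's log-likelihood keyness between the study corpus and the reference corpus. A compact sketch, with invented counts rather than the BNC's, is:

```python
# Hedged sketch of Dunning's log-likelihood (G2) keyness for one word.
import math

def log_likelihood(a, b, corpus1_size, corpus2_size):
    """G2 keyness: a, b = frequency of the word in each corpus."""
    total = corpus1_size + corpus2_size
    e1 = corpus1_size * (a + b) / total   # expected count, corpus 1
    e2 = corpus2_size * (a + b) / total   # expected count, corpus 2
    g2 = 0.0
    if a:
        g2 += a * math.log(a / e1)
    if b:
        g2 += b * math.log(b / e2)
    return 2 * g2

# e.g. a hesitation marker nearly absent from course books but frequent in speech
print(log_likelihood(a=3, b=4200, corpus1_size=500_000, corpus2_size=10_000_000))
```

High G2 values flag words that are over- or under-represented in the course books relative to authentic speech.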
Nuffield Early Language Intervention: Evaluation Report and Executive Summary
ERIC Educational Resources Information Center
Sibieta, Luke; Kotecha, Mehul; Skipp, Amy
2016-01-01
The Nuffield Early Language Intervention is designed to improve the spoken language ability of children during the transition from nursery to primary school. It is targeted at children with relatively poor spoken language skills. Three sessions per week are delivered to groups of two to four children starting in the final term of nursery and…
ERIC Educational Resources Information Center
Chambers, Craig G.; Cooke, Hilary
2009-01-01
A spoken language eye-tracking methodology was used to evaluate the effects of sentence context and proficiency on parallel language activation during spoken language comprehension. Nonnative speakers with varying proficiency levels viewed visual displays while listening to French sentences (e.g., "Marie va décrire la poule" [Marie will…
ERIC Educational Resources Information Center
Cisilino, William
Rhaeto-Romansh is a Neo-Latin language with three varieties. Occidental Rhaeto-Romansh (Romansh) is spoken in Switzerland, in the Canton of the Grisons. Central Rhaeto-Romansh (Dolomite Ladin) is spoken in some of the Italian Dolomite valleys, in the Province of Belluno, Bozen/Bolzano, and Trento. Oriental Rhaeto-Romansh (Friulian) is spoken in…
Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G
2010-01-01
This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.
Spoken language outcomes after hemispherectomy: factoring in etiology.
Curtiss, S; de Bode, S; Mathern, G W
2001-12-01
We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) or acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p = .0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p = .0006); right-sided resections led to higher SLRs only for the acquired group (p = .0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p = .0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology. Copyright 2001 Elsevier Science.
Phonological Stereotypes and Names in Temne.
ERIC Educational Resources Information Center
Nemer, Julie F.
1987-01-01
Many personal names in Temne (a Mel language spoken in Sierra Leone) are borrowed from other languages, containing foreign sounds and sequences which are unpronounceable for Temne speakers when they appear in other words. These exceptions are treated as instances of phonological stereotyping (cases remaining resistant to assimilation processes).…
ERIC Educational Resources Information Center
Andreu, Llorenc; Sanz-Torrent, Monica; Trueswell, John C.
2013-01-01
Twenty-five children with specific language impairment (SLI; age 5 years, 3 months [5;3]-8;2), 50 typically developing children (3;3-8;2), and 31 normal adults participated in three eye-tracking experiments of spoken language comprehension that were designed to investigate the use of verb information during real-time sentence comprehension in…
ERIC Educational Resources Information Center
Grogan, A.; Parker Jones, O.; Ali, N.; Crinion, J.; Orabona, S.; Mechias, M. L.; Ramsden, S.; Green, D. W.; Price, C. J.
2012-01-01
We used structural magnetic resonance imaging (MRI) and voxel based morphometry (VBM) to investigate whether the efficiency of word processing in the non-native language (lexical efficiency) and the number of non-native languages spoken (2+ versus 1) were related to local differences in the brain structure of bilingual and multilingual speakers.…
ERIC Educational Resources Information Center
Mason, Katherine
2006-01-01
In an environment in which English is a second or other language for every student, fear and anxiety affect students' learning and engagement. Yet, in spite of these concerns, students welcomed the chance to practice their spoken English in cooperative structures while learning about and engaging in their composing processes. English language…
ERIC Educational Resources Information Center
Granger, Sylviane; Kraif, Olivier; Ponton, Claude; Antoniadis, Georges; Zampa, Virginie
2007-01-01
Learner corpora, electronic collections of spoken or written data from foreign language learners, offer unparalleled access to many hitherto uncovered aspects of learner language, particularly in their error-tagged format. This article aims to demonstrate the role that the learner corpus can play in CALL, particularly when used in conjunction with…
Speech-Language Pathologists: Vital Listening and Spoken Language Professionals
ERIC Educational Resources Information Center
Houston, K. Todd; Perigoe, Christina B.
2010-01-01
Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…
Parent Telegraphic Speech Use and Spoken Language in Preschoolers with ASD
ERIC Educational Resources Information Center
Venker, Courtney E.; Bolt, Daniel M.; Meyer, Allison; Sindberg, Heidi; Weismer, Susan Ellis; Tager-Flusberg, Helen
2015-01-01
Purpose: There is considerable controversy regarding whether to use telegraphic or grammatical input when speaking to young children with language delays, including children with autism spectrum disorder (ASD). This study examined telegraphic speech use in parents of preschoolers with ASD and associations with children's spoken language 1 year…
Spoken Language Development in Oral Preschool Children with Permanent Childhood Deafness
ERIC Educational Resources Information Center
Sarant, Julia Z.; Holt, Colleen M.; Dowell, Richard C.; Rickards, Field W.
2009-01-01
This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were…
Acquisition of graphic communication by a young girl without comprehension of spoken language.
von Tetzchner, S; Øvreeide, K D; Jørgensen, K K; Ormhaug, B M; Oxholm, B; Warme, R
To describe a graphic-mode communication intervention involving a girl with intellectual impairment and autism who did not develop comprehension of spoken language. The aim was to teach graphic-mode vocabulary that reflected her interests, preferences, and the activities and routines of her daily life, by providing sufficient cues to the meanings of the graphic representations so that she would not need to comprehend spoken instructions. An individual case study design was selected, including the use of written records, participant observation, and registration of the girl's graphic vocabulary and use of graphic signs and other communicative expressions. While the girl's comprehension (and hence use) of spoken language remained lacking over a 3-year period, she acquired an active use of over 80 photographs and pictograms. The girl was able to cope better with the cognitive and attentional requirements of graphic communication than with those of spoken language and manual signs, which had been the focus of earlier interventions. Her achievements demonstrate that it is possible for communication-impaired children to learn to use an augmentative and alternative communication system without speech comprehension, provided the intervention utilizes functional strategies and non-language cues to the meanings of the graphic representations that are taught.
Bimodal Bilingual Language Development of Hearing Children of Deaf Parents
ERIC Educational Resources Information Center
Hofmann, Kristin; Chilla, Solveig
2015-01-01
Adopting a bimodal bilingual language acquisition model, this qualitative case study is the first in Germany to investigate the spoken and sign language development of hearing children of deaf adults (codas). The spoken language competence of six codas within the age range of 3;10 to 6;4 is assessed by a series of standardised tests (SETK 3-5,…
Spoken Language Development in Children Following Cochlear Implantation
Niparko, John K.; Tobey, Emily A.; Thal, Donna J.; Eisenberg, Laurie S.; Wang, Nae-Yuh; Quittner, Alexandra L.; Fink, Nancy E.
2010-01-01
Context: Cochlear implantation (CI) is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe-to-profound sensorineural hearing loss (SNHL). Objective: To prospectively assess spoken language acquisition following CI in young children with adjustment of covariates. Design, Setting, and Participants: Prospective, longitudinal, and multidimensional assessment of spoken language growth over a 3-year period following CI. Prospective cohort study of children who underwent CI before 5 years of age (n = 188) from 6 US centers and hearing children of similar ages (n = 97) from 2 preschools, recruited between November 2002 and December 2004. Follow-up completed between November 2005 and May 2008. Main Outcome Measures: Performance on measures of spoken language comprehension and expression. Results: Children undergoing CI showed greater growth in spoken language performance (comprehension: 10.4 points/year, 95% confidence interval [CI] 9.6–11.2; expression: 8.4, 95% CI 7.8–9.0) than would be predicted by their pre-CI baseline scores (comprehension: 5.4, 95% CI 4.1–6.7; expression: 5.8, 95% CI 4.6–7.0). Although mean scores were not restored to age-appropriate levels after 3 years, significantly greater annual rates of language acquisition were observed in children who were younger at CI (comprehension: 1.1 points, 95% CI 0.5–1.7, per year younger; expression: 1.0, 95% CI 0.6–1.5) and in children with shorter histories of hearing deficit (comprehension: 0.8 points, 95% CI 0.2–1.2, per year shorter; expression: 0.6, 95% CI 0.2–1.0). In multivariable analyses, greater residual hearing prior to CI, higher ratings of parent-child interactions, and higher socioeconomic status were associated with greater rates of growth in comprehension and expression. Conclusions: The use of cochlear implants in young children was associated with better spoken language learning than would be predicted from their pre-implantation scores. However, discrepancies between participants' chronologic and language age persisted after CI, underscoring the importance of early CI in appropriately selected candidates. PMID:20407059
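Growth rates like those reported (points per year, moderated by age at implantation) typically come from longitudinal mixed models. Here is a hedged sketch with simulated data and assumed variable names, not the study's dataset, using a random intercept and slope per child:

```python
# Hedged sketch of a longitudinal growth model: score ~ years post-CI,
# with age at implantation moderating the slope (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_children, n_visits = 188, 4
child = np.repeat(np.arange(n_children), n_visits)
years = np.tile(np.arange(n_visits), n_children)            # 0..3 years post-CI
age_ci = np.repeat(rng.uniform(0.5, 5.0, n_children), n_visits)

# Child-specific growth: steeper for younger implantation (illustrative numbers).
slope = 10.4 - 1.1 * age_ci + rng.normal(0, 1, n_children)[child]
score = 40 + slope * years + rng.normal(0, 3, len(child))

df = pd.DataFrame(dict(child=child, years=years, age_ci=age_ci, score=score))
model = smf.mixedlm("score ~ years * age_ci", df,
                    groups=df.child, re_formula="~years")   # random intercept+slope
print(model.fit().summary())
```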
Tonal Language Background and Detecting Pitch Contour in Spoken and Musical Items
ERIC Educational Resources Information Center
Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.
2013-01-01
An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…
User-Centred Design for Chinese-Oriented Spoken English Learning System
ERIC Educational Resources Information Center
Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting
2016-01-01
Oral production is an important part of English learning. The lack of a language environment with efficient instruction and feedback is a major obstacle to non-native speakers' improvement of their spoken English. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…
Phonological Awareness in Mandarin of Chinese and Americans
ERIC Educational Resources Information Center
Hu, Min
2009-01-01
Phonological awareness (PA) is the ability to analyze spoken language into its component sounds and to manipulate these smaller units. Literature review related to PA shows that a variety of factor groups play a role in PA in Mandarin such as linguistic experience (spoken language, alphabetic literacy, and second language learning), item type,…
A Mother Tongue Spoken Mainly by Fathers.
ERIC Educational Resources Information Center
Corsetti, Renato
1996-01-01
Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families are known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggests that this "artificial bilingualism" can be as successful…
Understanding Communication among Deaf Students Who Sign and Speak: A Trivial Pursuit?
ERIC Educational Resources Information Center
Marschark, Marc; Convertino, Carol M.; Macias, Gayle; Monikowski, Christine M.; Sapere, Patricia; Seewagen, Rosemarie
2007-01-01
Classroom communication between deaf students was modeled using a question-and-answer game. Participants consisted of student pairs that relied on spoken language, pairs that relied on American Sign Language (ASL), and mixed pairs in which one student used spoken language and one signed. Although the task encouraged students to request…
ERIC Educational Resources Information Center
Naito-Billen, Yuka
2012-01-01
Recently, the significant role that pronunciation and prosody play in processing spoken language has been widely recognized, and a variety of pronunciation/prosody teaching methodologies have been implemented in foreign language teaching. Thus, an analysis of how similarly or differently native and L2 learners of a language use…
Hirshorn, Elizabeth A.; Dye, Matthew W. G.; Hauser, Peter; Supalla, Ted R.; Bavelier, Daphne
2015-01-01
While reading is challenging for many deaf individuals, some become proficient readers. Little is known about the component processes that support reading comprehension in these individuals. Speech-based phonological knowledge is one of the strongest predictors of reading comprehension in hearing individuals, yet its role in deaf readers is controversial. This could reflect the highly varied language backgrounds among deaf readers as well as the difficulty of disentangling the relative contribution of phonological versus orthographic knowledge of spoken language, in our case ‘English,’ in this population. Here we assessed the impact of language experience on reading comprehension in deaf readers by recruiting oral deaf individuals, who use spoken English as their primary mode of communication, and deaf native signers of American Sign Language. First, to address the contribution of spoken English phonological knowledge in deaf readers, we present novel tasks that evaluate phonological versus orthographic knowledge. Second, the impact of this knowledge, as well as memory measures that rely differentially on phonological (serial recall) and semantic (free recall) processing, on reading comprehension was evaluated. The best predictor of reading comprehension differed as a function of language experience, with free recall being a better predictor in deaf native signers than in oral deaf. In contrast, the measures of English phonological knowledge, independent of orthographic knowledge, best predicted reading comprehension in oral deaf individuals. These results suggest successful reading strategies differ across deaf readers as a function of their language experience, and highlight a possible alternative route to literacy in deaf native signers. Highlights: 1. Deaf individuals vary in their orthographic and phonological knowledge of English as a function of their language experience. 2. Reading comprehension was best predicted by different factors in oral deaf and deaf native signers. 3. Free recall memory (primacy effect) better predicted reading comprehension in deaf native signers as compared to oral deaf or hearing individuals. 4. Language experience should be taken into account when considering cognitive processes that mediate reading in deaf individuals. PMID:26379566
Neural Processing of Spoken Words in Specific Language Impairment and Dyslexia
ERIC Educational Resources Information Center
Helenius, Paivi; Parviainen, Tiina; Paetau, Ritva; Salmelin, Riitta
2009-01-01
Young adults with a history of specific language impairment (SLI) differ from reading-impaired (dyslexic) individuals in terms of limited vocabulary and poor verbal short-term memory. Phonological short-term memory has been shown to play a significant role in learning new words. We investigated the neural signatures of auditory word recognition…
Eye Movements Reveal the Dynamic Simulation of Speed in Language
ERIC Educational Resources Information Center
Speed, Laura J.; Vigliocco, Gabriella
2014-01-01
This study investigates how speed of motion is processed in language. In three eye-tracking experiments, participants were presented with visual scenes and spoken sentences describing fast or slow events (e.g., "The lion ambled/dashed to the balloon"). Results showed that looking time to relevant objects in the visual scene was affected…
Lexical access in sign language: a computational model.
Caselli, Naomi K; Cohen-Goldberg, Ariel M
2014-01-01
Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: how many of the conclusions truly reflect language-general principles, as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
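A toy spreading activation network in the spirit of the model described, where lexical nodes share activation through sub-lexical units such as a common handshape or location, can be sketched as follows. The network, parameters, and update rule are illustrative, not the published architecture.

```python
# Toy spreading activation: lexical nodes linked via sub-lexical units.
import numpy as np

nodes = ["TARGET", "neighbor1", "neighbor2", "handshape_A", "location_X"]
idx = {n: i for i, n in enumerate(nodes)}

W = np.zeros((5, 5))   # symmetric excitatory links (lexical <-> sub-lexical)
for a, b in [("TARGET", "handshape_A"), ("neighbor1", "handshape_A"),
             ("TARGET", "location_X"), ("neighbor2", "location_X")]:
    W[idx[a], idx[b]] = W[idx[b], idx[a]] = 0.1

act = np.zeros(5)
decay, input_gain = 0.2, 0.5
for _ in range(20):                      # perceptual input drives the target
    external = np.zeros(5)
    external[idx["TARGET"]] = input_gain
    act = (1 - decay) * act + W @ act + external
    act = np.clip(act, 0, 1)             # bounded activation

for n in nodes:                          # neighbors gain activation indirectly
    print(f"{n:12s} {act[idx[n]]:.3f}")
```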
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Select Committee on Indian Affairs.
Past U.S. policies toward Indian and other Native American languages have attempted to suppress the use of the languages in government-operated Indian schools for assimilating Indian children. About 155 Native languages are spoken today in the United States, but only 20 are spoken by people of all ages. The Native American Languages Act of 1990…
Hall, Wyatte C
2017-05-01
A long-standing belief is that sign language interferes with spoken language development in deaf children, despite a chronic lack of evidence supporting this belief. This deserves discussion as poor life outcomes continue to be seen in the deaf population. This commentary synthesizes research outcomes with signing and non-signing children and highlights fully accessible language as a protective factor for healthy development. Brain changes associated with language deprivation may be misrepresented as sign language interfering with spoken language outcomes of cochlear implants. This may lead professionals and organizations to advocate for preventing sign language exposure before implantation and to spread misinformation. The existence of a time-sensitive language acquisition window means a strong possibility of permanent brain changes when spoken language is not fully accessible to the deaf child and sign language exposure is delayed, as is often standard practice. There is no empirical evidence for the harm of sign language exposure but there is some evidence for its benefits, and there is growing evidence that lack of language access has negative implications. This includes cognitive delays, mental health difficulties, lower quality of life, higher trauma, and limited health literacy. Claims of cochlear implant- and spoken language-only approaches being more effective than sign language-inclusive approaches are not empirically supported. Cochlear implants are an unreliable standalone first-language intervention for deaf children. Priorities of deaf child development should focus on healthy growth of all developmental domains through a fully-accessible first language foundation such as sign language, rather than auditory deprivation and speech skills.
ERIC Educational Resources Information Center
Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.
2018-01-01
Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…
ERIC Educational Resources Information Center
Al-Nofaie, Haifa
2018-01-01
This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (henceforth JFL), a language which is rarely spoken in the country. Studies regarding children's motivation for learning, in informal settings, foreign languages that are not widely spoken in their contexts are scarce. The aim of the study…
Inhibitory control and the speech patterns of second language users.
Korko, Malgorzata; Williams, Simon A
2017-02-01
Inhibitory control (IC), an ability to suppress irrelevant and/or conflicting information, has been found to underlie performance on a variety of cognitive tasks, including bilingual language processing. This study examines the relationship between IC and the speech patterns of second language (L2) users from the perspective of individual differences. While the majority of studies have supported the role of IC in bilingual language processing using single-word production paradigms, this work looks at inhibitory processes in the context of extended speech, with a particular emphasis on disfluencies. We hypothesized that the speech of individuals with poorer IC would be characterized by reduced fluency. A series of regression analyses, in which we controlled for age and L2 proficiency, revealed that IC (in terms of accuracy on the Stroop task) could reliably predict the occurrence of reformulations and the frequency and duration of silent pauses in L2 speech. No statistically significant relationship was found between IC and other L2 spoken output measures, such as repetitions, filled pauses, and performance errors. Conclusions focus on IC as one out of a number of cognitive functions in the service of spoken language production. A more qualitative approach towards the question of whether L2 speakers rely on IC is advocated. © 2016 The British Psychological Society.
Worsfold, Sarah; Mahon, Merle; Pimperton, Hannah; Stevenson, Jim; Kennedy, Colin
2018-06-01
Deaf and hard of hearing (D/HH) children and young people are known to show group-level deficits in spoken language and reading abilities relative to their hearing peers. However, there is little evidence on the longitudinal predictive relationships between language and reading in this population. Aims: To determine the extent to which differences in spoken language ability in childhood predict reading ability in D/HH adolescents. Methods and procedures: Participants were drawn from a population-based cohort study and comprised 53 D/HH teenagers, who used spoken language, and a comparison group of 38 normally hearing teenagers. All had completed standardised measures of spoken language (expression and comprehension) and reading (accuracy and comprehension) at 6-10 and 13-19 years of age. Outcomes and results: Forced entry stepwise regression showed that, after taking reading ability at age 8 years into account, language scores at age 8 years did not add significantly to the prediction of Reading Accuracy z-scores at age 17 years (change in R² = 0.01, p = .459) but did make a significant contribution to the prediction of Reading Comprehension z-scores at age 17 years (change in R² = 0.17, p < .001). Conclusions and implications: In D/HH individuals who are spoken language users, expressive and receptive language skills in middle childhood predict reading comprehension ability in adolescence. Continued intervention to support language development beyond primary school has the potential to benefit reading comprehension and hence educational access for D/HH adolescents. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
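For readers unfamiliar with forced-entry regression of this kind, the sketch below reproduces the logic on simulated data: the age-8 reading score is entered first, then the age-8 language score, and the change in R² measures its unique contribution. Variable names, effect sizes, and data are illustrative assumptions, not the study's dataset.

```python
# Hedged sketch of a forced-entry (hierarchical) regression with a change-in-R²
# test, on simulated data. Names and effect sizes are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 53
reading_age8 = rng.normal(size=n)
language_age8 = 0.5 * reading_age8 + rng.normal(size=n)
reading_comp_age17 = 0.4 * reading_age8 + 0.5 * language_age8 + rng.normal(size=n)

# Step 1: enter age-8 reading alone.
step1 = sm.OLS(reading_comp_age17, sm.add_constant(reading_age8)).fit()

# Step 2: add age-8 language; its unique contribution is the change in R².
X2 = sm.add_constant(np.column_stack([reading_age8, language_age8]))
step2 = sm.OLS(reading_comp_age17, X2).fit()

print(f"R² step 1 = {step1.rsquared:.2f}")
print(f"Change in R² = {step2.rsquared - step1.rsquared:.2f}")
```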
Cochlear implants and spoken language processing abilities: review and assessment of the literature.
Peterson, Nathaniel R; Pisoni, David B; Miyamoto, Richard T
2010-01-01
Cochlear implants (CIs) process sounds electronically and then transmit electric stimulation to the cochlea of individuals with sensorineural deafness, restoring some sensation of auditory perception. Many congenitally deaf CI recipients achieve a high degree of accuracy in speech perception and develop near-normal language skills. Post-lingually deafened implant recipients often regain the ability to understand and use spoken language with or without the aid of visual input (i.e. lip reading). However, there is wide variation in individual outcomes following cochlear implantation, and some CI recipients never develop useable speech and oral language skills. The causes of this enormous variation in outcomes are only partly understood at the present time. The variables most strongly associated with language outcomes are age at implantation and mode of communication in rehabilitation. Thus, some of the more important factors determining success of cochlear implantation are broadly related to neural plasticity that appears to be transiently present in deaf individuals. In this article we review the expected outcomes of cochlear implantation, potential predictors of those outcomes, the basic science regarding critical and sensitive periods, and several new research directions in the field of cochlear implantation.
Pronunciation difficulty, temporal regularity, and the speech-to-song illusion.
Margulis, Elizabeth H; Simchy-Gross, Rhimmon; Black, Justin L
2015-01-01
The speech-to-song illusion (Deutsch et al., 2011) tracks the perceptual transformation from speech to song across repetitions of a brief spoken utterance. Because it involves no change in the stimulus itself, but a dramatic change in its perceived affiliation to speech or to music, it presents a unique opportunity to comparatively investigate the processing of language and music. In this study, native English-speaking participants were presented with brief spoken utterances that were subsequently repeated ten times. The utterances were drawn either from languages that are relatively difficult for a native English speaker to pronounce, or languages that are relatively easy for a native English speaker to pronounce. Moreover, the repetition could occur at regular or irregular temporal intervals. Participants rated the utterances before and after the repetitions on a 5-point Likert-like scale ranging from "sounds exactly like speech" to "sounds exactly like singing." The difference in ratings before and after was taken as a measure of the strength of the speech-to-song illusion in each case. The speech-to-song illusion occurred regardless of whether the repetitions were spaced at regular temporal intervals or not; however, it occurred more readily if the utterance was spoken in a language difficult for a native English speaker to pronounce. Speech circuitry seemed more liable to capture native and easy-to-pronounce languages, and more reluctant to relinquish them to perceived song across repetitions.
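The illusion-strength measure described here is simply a pre/post difference in ratings averaged by condition; a minimal sketch (with made-up numbers and assumed column names) makes the computation concrete.

```python
# Minimal sketch of the illusion-strength measure: rating after repetitions
# minus rating before, averaged within each condition. Numbers are made up.
import pandas as pd

ratings = pd.DataFrame({
    "pronunciation": ["easy", "easy", "hard", "hard"],
    "timing":        ["regular", "irregular", "regular", "irregular"],
    "before":        [1.8, 1.9, 1.7, 1.8],   # 1 = "sounds exactly like speech"
    "after":         [2.6, 2.5, 3.4, 3.3],   # 5 = "sounds exactly like singing"
})
ratings["illusion_strength"] = ratings["after"] - ratings["before"]
print(ratings.groupby(["pronunciation", "timing"])["illusion_strength"].mean())
```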
Spencer, Sarah; Clegg, Judy; Stackhouse, Joy; Rush, Robert
2017-03-01
Well-documented associations exist between socio-economic background and language ability in early childhood, and between educational attainment and language ability in children with clinically referred language impairment. However, very little research has looked at the associations between language ability, educational attainment and socio-economic background during adolescence, particularly in populations without language impairment. To investigate: (1) whether adolescents with higher educational outcomes overall had higher language abilities; and (2) associations between adolescent language ability, socio-economic background and educational outcomes, specifically in relation to Mathematics, English Language and English Literature GCSE grade. A total of 151 participants completed five standardized language assessments measuring vocabulary, comprehension of sentences and spoken paragraphs, and narrative skills and one nonverbal assessment when between 13 and 14 years old. These data were compared with the participants' educational achievement obtained upon leaving secondary education (16 years old). Univariate logistic regressions were employed to identify those language assessments and demographic factors that were associated with achieving a targeted A*-C grade in English Language, English Literature and Mathematics General Certificate of Secondary Education (GCSE) at 16 years. Further logistic regressions were then conducted to examine further the contribution of socio-economic background and spoken language skills in the multivariate models. Vocabulary, comprehension of sentences and spoken paragraphs, and mean length of utterance in a narrative task along with socio-economic background contributed to whether participants achieved an A*-C grade in GCSE Mathematics, English Language and English Literature. Nonverbal ability contributed to English Language and Mathematics. The results of multivariate logistic regressions then found that vocabulary skills were particularly relevant to all three GCSE outcomes. Socio-economic background only remained important for English Language, once language assessment scores and demographic information were considered. Language ability, and in particular vocabulary, plays an important role for educational achievement. Results confirm a need for ongoing support for spoken language ability throughout secondary education and a potential role for speech and language therapy provision in the continuing drive to reduce the gap in educational attainment between groups from differing socio-economic backgrounds. © 2016 Royal College of Speech and Language Therapists.
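As an illustration of the univariate-then-multivariate logistic modelling described above, the sketch below fits both stages on simulated data; the predictor names, coding, and effect sizes are assumptions for demonstration only, not the study's variables.

```python
# Sketch of univariate and multivariate logistic regressions predicting an
# A*-C GCSE outcome. Simulated data; predictors and effects are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 151
vocabulary = rng.normal(size=n)              # standardized vocabulary score
low_ses = rng.binomial(1, 0.4, size=n)       # socio-economic background indicator
log_odds = 0.8 * vocabulary - 0.6 * low_ses
grade_a_star_to_c = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Univariate model: language assessment alone.
uni = sm.Logit(grade_a_star_to_c, sm.add_constant(vocabulary)).fit(disp=0)

# Multivariate model: language assessment plus socio-economic background.
X = sm.add_constant(np.column_stack([vocabulary, low_ses]))
multi = sm.Logit(grade_a_star_to_c, X).fit(disp=0)

print("Univariate odds ratio (vocabulary):", float(np.exp(uni.params[1])))
print("Multivariate odds ratios:", np.exp(multi.params[1:]))
```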
Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine
2015-01-01
Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070
System in Black Language. Multilingual Matters Series: 77.
ERIC Educational Resources Information Center
Sutcliffe, David; Figueroa, John
An examination of pattern in certain languages spoken primarily by Blacks has both a narrow and a broad focus. The former is on the structure and development of the creole spoken by Jamaicans in England and, to a lesser extent, Black Country English. The broader focus is on the relationship between the Kwa languages of West Africa and the…
Do Phonological Constraints on the Spoken Word Affect Visual Lexical Decision?
ERIC Educational Resources Information Center
Lee, Yang; Moreno, Miguel A.; Carello, Claudia; Turvey, M. T.
2013-01-01
Reading a word may involve the spoken language in two ways: in the conversion of letters to phonemes according to the conventions of the language's writing system and the assimilation of phonemes according to the language's constraints on speaking. If so, then words that require assimilation when uttered would require a change in the phonemes…
ERIC Educational Resources Information Center
Geytenbeek, Joke J. M.; Heim, Margriet J. M.; Knol, Dirk L.; Vermeulen, R. Jeroen; Oostrom, Kim J.
2015-01-01
Background Children with severe cerebral palsy (CP) (i.e. "non-speaking children with severely limited mobility") are restricted in many domains that are important to the acquisition of language. Aims To investigate comprehension of spoken language on sentence type level in non-speaking children with severe CP. Methods & Procedures…
Examining the Concept of Subordination in Spoken L1 and L2 English: The Case of "If"-Clauses
ERIC Educational Resources Information Center
Basterrechea, María; Weinert, Regina
2017-01-01
This article explores the applications of research on native spoken language into second language learning in the concept of subordination. Second language (L2) learners' ability to integrate subordinate clauses is considered an indication of higher proficiency (e.g., Ellis & Barkhuizen, 2005; Tarone & Swierzbin, 2009). However, the notion…
Pizer, Ginger; Walters, Keith; Meier, Richard P
2013-01-01
Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing communities. Despite this variation, the interviewees' moral judgments of their own and others' communicative behavior suggest that these adults share a language ideology concerning the obligation of all family members to expend effort to overcome potential communication barriers. To our knowledge, such a language ideology is not similarly pervasive among spoken-language bilingual families, raising the question of whether there is something unique about family bimodal bilingualism that imposes different rights and responsibilities on family members than spoken-language family bilingualism does. This ideology unites an otherwise diverse group of interviewees, where each one preemptively denied being a "typical CODA [children of deaf adult]."
Production Is Only Half the Story - First Words in Two East African Languages.
Alcock, Katherine J
2017-01-01
Theories of early learning of nouns in children's vocabularies divide into those that emphasize input (language and non-linguistic aspects) and those that emphasize child conceptualisation. Most data though come from production alone, assuming that learning a word equals speaking it. Methodological issues can mean production and comprehension data within or across input languages are not comparable. Early vocabulary production and comprehension were examined in children hearing two Eastern Bantu languages whose grammatical features may encourage early verb knowledge. Parents of 208 infants aged 8-20 months were interviewed using Communicative Development Inventories that assess infants' first spoken and comprehended words. Raw totals, and proportions of chances to know a word, were compared to data from other languages. First spoken words were mainly nouns (75-95% were nouns versus less than 10% verbs) but first comprehended words included more verbs (15% were verbs) than spoken words did. The proportion of children's spoken words that were verbs increased with vocabulary size, but not the proportion of comprehended words. Significant differences were found between children's comprehension and production but not between languages. This may be for pragmatic reasons, rather than due to concepts with which children approach language learning, or directly due to the input language.
Spoken language skills and educational placement in Finnish children with cochlear implants.
Lonka, Eila; Hasan, Marja; Komulainen, Erkki
2011-01-01
This study reports the demographics, and the auditory and spoken language development as well as educational settings, for a total of 164 Finnish children with cochlear implants. Two questionnaires were employed: the first, concerning day care and educational placement, was filled in by professionals for rehabilitation guidance, and the second, evaluating language development (categories of auditory performance, spoken language skills, and main mode of communication), by speech and language therapists in audiology departments. Nearly half of the children were enrolled in normal kindergartens and 43% of school-aged children in mainstream schools. Categories of auditory performance were observed to grow in relation to age at cochlear implantation (p < 0.001) as well as in relation to proportional hearing age (p < 0.001). The composite scores for language development moved to more diversified ones in relation to increasing age at cochlear implantation and proportional hearing age (p < 0.001). Children without additional disorders outperformed those with additional disorders. The results indicate that the most favorable age for cochlear implantation could be earlier than 2. Compared to other children, spoken language evaluation scores of those with additional disabilities were significantly lower; however, these children showed gradual improvements in their auditory perception and language scores. Copyright © 2011 S. Karger AG, Basel.
Open-Source Multi-Language Audio Database for Spoken Language Processing Applications
2012-12-01
Mandarin, and Russian. Approximately 30 hours of speech were collected for each language. Each passage has been carefully transcribed at the ... manual and automatic methods. The Russian passages have not yet been marked at the phonetic level. Another phase of the work was to explore ... YouTube. 300 passages were collected in each of three languages—English, Mandarin, and Russian.
ERIC Educational Resources Information Center
Brown, Gillian
1981-01-01
Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…
Rapid modulation of spoken word recognition by visual primes.
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J
2016-02-01
In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.
Emotion-Memory Effects in Bilingual Speakers: A Levels-of-Processing Approach
ERIC Educational Resources Information Center
Aycicegi-Dinn, Ayse; Caldwell-Harris, Catherine L.
2009-01-01
Emotion-memory effects occur when emotion words are more frequently recalled than neutral words. Bilingual speakers report that taboo terms and emotional phrases generate a stronger emotional response when heard or spoken in their first language. This suggests that the basic emotion-memory effect will be stronger for words presented in a first language.…
Event Processing in the Visual World: Projected Motion Paths during Spoken Sentence Comprehension
ERIC Educational Resources Information Center
Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue
2016-01-01
Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the…
Deviant ERP Response to Spoken Non-Words among Adolescents Exposed to Cocaine in Utero
ERIC Educational Resources Information Center
Landi, Nicole; Crowley, Michael J.; Wu, Jia; Bailey, Christopher A.; Mayes, Linda C.
2012-01-01
Concern for the impact of prenatal cocaine exposure (PCE) on human language development is based on observations of impaired performance on assessments of language skills in these children relative to non-exposed children. We investigated the effects of PCE on speech processing ability using event-related potentials (ERPs) among a sample of…
ERIC Educational Resources Information Center
Brumm, Kathleen Patricia
2011-01-01
This project examines spoken language comprehension in Broca's aphasia, a non-fluent language disorder acquired subsequent to stroke. Broca's aphasics demonstrate impaired comprehension for complex sentence constructions. To account for this deficit, one current processing theory claims that Broca's patients retain intrinsic linguistic knowledge,…
Detailed Phonetic Labeling of Multi-language Database for Spoken Language Processing Applications
2015-03-01
which contains about 60 interfering speakers as well as background music in a bar. The top panel is again clean training /noisy testing settings, and...recognition system for Mandarin was developed and tested. Character recognition rates as high as 88% were obtained, using an approximately 40 training ...Tool_ComputeFeat.m) .............................................................................................................. 50 6.3. Training
ERIC Educational Resources Information Center
Troyer, Melissa; Borovsky, Arielle
2017-01-01
In infancy, maternal socioeconomic status (SES) is associated with real-time language processing skills, but whether or not (and if so, how) this relationship carries into adulthood is unknown. We explored the effects of maternal SES in college-aged adults on eye-tracked, spoken sentence comprehension tasks using the visual world paradigm. When…
ERIC Educational Resources Information Center
Shtyrov, Yury; Smith, Marie L.; Horner, Aidan J.; Henson, Richard; Nathan, Pradeep J.; Bullmore, Edward T.; Pulvermuller, Friedemann
2012-01-01
Previous research indicates that, under explicit instructions to listen to spoken stimuli or in speech-oriented behavioural tasks, the brain's responses to senseless pseudowords are larger than those to meaningful words; the reverse is true in non-attended conditions. These differential responses could be used as a tool to trace linguistic…
Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul
2010-01-01
Purpose: The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition as part of treatment research studies or for use in applied settings and (b) to propose and define a common terminology for describing levels of spoken language ability in the expressive modality and to set benchmarks for determining a child’s language level in order to establish a framework for comparing outcomes across intervention studies. Method: The National Institute on Deafness and Other Communication Disorders assembled a group of researchers with interests and experience in the study of language development and disorders in young children with autism spectrum disorders. The group worked for 18 months through a series of conference calls and correspondence, culminating in a meeting held in December 2007 to achieve consensus on these aims. Results: The authors recommend moving away from using the term functional speech, replacing it with a developmental framework. They recommend using multiple sources of information to define language phases, including natural language samples, parent report, and standardized measures. They also provide guidelines and objective criteria for defining children’s spoken language expression in three major phases that correspond to developmental levels between 12 and 48 months of age. PMID:19380608
Glossary of Terms Relating to Languages of the Middle East.
ERIC Educational Resources Information Center
Ferguson, Charles A.
This glossary gives brief, non-technical explanations of the following kinds of terms: (1) names of all important languages now spoken in the Middle East, or known to have been spoken in the area; (2) names of language families represented in the area; (3) descriptive terms used with reference to the writing systems of the area; (4) names of…
The Lightening Veil: Language Revitalization in Wales
ERIC Educational Resources Information Center
Williams, Colin H.
2014-01-01
The Welsh language, which is indigenous to Wales, is one of six Celtic languages. It is spoken by 562,000 speakers, 19% of the population of Wales, according to the 2011 U.K. Census, and it is estimated that it is spoken by a further 200,000 residents elsewhere in the United Kingdom. No exact figures exist for the undoubted thousands of other…
ERIC Educational Resources Information Center
Farfan, Jose Antonio Flores
Even though Nahuatl is the most widely spoken indigenous language in Mexico, it is endangered. Threats include poor support for Nahuatl-speaking communities, migration of Nahuatl speakers to cities where English and Spanish are spoken, prejudicial attitudes toward indigenous languages, lack of contact between small communities of different…
ERIC Educational Resources Information Center
Shaw, Emily P.
2013-01-01
This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…
ERIC Educational Resources Information Center
Taha, Haitham
2017-01-01
The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted according to the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and…
Notes from the Field: Lolak--Another Moribund Language of Indonesia, with Supporting Audio
ERIC Educational Resources Information Center
Lobel, Jason William; Paputungan, Ade Tatak
2017-01-01
This paper consists of a short multimedia introduction to Lolak, a near-extinct Greater Central Philippine language traditionally spoken in three small communities on the island of Sulawesi in Indonesia. In addition to being one of the most underdocumented languages in the area, it is also spoken by one of the smallest native speaker populations…
Syntax as a Reflex: Neurophysiological Evidence for Early Automaticity of Grammatical Processing
ERIC Educational Resources Information Center
Pulvermuller, Friedemann; Shtyrov, Yury; Hasting, Anna S.; Carlyon, Robert P.
2008-01-01
It has been a matter of debate whether the specifically human capacity to process syntactic information draws on attentional resources or is automatic. To address this issue, we recorded neurophysiological indicators of syntactic processing to spoken sentences while subjects were distracted to different degrees from language processing. Subjects…
The gender congruency effect during bilingual spoken-word recognition
Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa
2016-01-01
We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132
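A common way to quantify the effect reported above is the proportion of looks to the target object per time bin and condition. The sketch below shows that computation on a mock fixation table; the column names and data are assumptions, not the authors' data format.

```python
# Sketch of a visual-world analysis: proportion of fixations on the target
# object per time bin and gender condition. Mock data; column names assumed.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
samples = pd.DataFrame({
    "condition":   rng.choice(["congruent", "incongruent"], size=2000),
    "time_bin_ms": rng.choice(np.arange(0, 1000, 50), size=2000),
    "on_target":   rng.binomial(1, 0.5, size=2000),  # 1 = gaze on target object
})

# Mean of the 0/1 indicator = proportion of looks to target.
prop_looks = (samples
              .groupby(["condition", "time_bin_ms"])["on_target"]
              .mean()
              .unstack("condition"))
print(prop_looks.head())
```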
Advances in natural language processing.
Hirschberg, Julia; Manning, Christopher D
2015-07-17
Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.
Automatic processing of pragmatic information in the human brain: a mismatch negativity study.
Zhao, Ming; Liu, Tao; Chen, Feiyan
2018-05-23
Language comprehension involves pragmatic information processing, which allows world knowledge to influence the interpretation of a sentence. This study explored whether pragmatic information can be automatically processed during spoken sentence comprehension. The experiment adopted the mismatch negativity (MMN) paradigm to capture the neurophysiological indicators of automatic processing of spoken sentences. Pragmatically incorrect ('Foxes have wings') and correct ('Butterflies have wings') sentences were used as the experimental stimuli. In condition 1, the pragmatically correct sentence was the deviant and the pragmatically incorrect sentence was the standard stimulus, whereas the opposite case was presented in condition 2. The experimental results showed that MMN effects were induced within 60-120 and 220-260 ms when the pragmatically incorrect sentence served as the deviant stimulus, relative to the condition in which the pragmatically correct sentence was the deviant. The results indicated that the human brain can monitor for incorrect pragmatic information in the inattentive state and can automatically process pragmatic information at the beginning of spoken sentence comprehension.
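To make the paradigm concrete, here is a small sketch of how a passive-oddball trial list of this kind might be generated, with the standard/deviant roles swapped between conditions. The deviant probability and the no-immediate-repetition constraint are generic assumptions about typical MMN designs, not the study's exact script.

```python
# Sketch of an MMN oddball trial list: one sentence is the frequent standard,
# the other the rare deviant. Probabilities/constraints are generic assumptions.
import random

def oddball_sequence(standard, deviant, n_trials=400, p_deviant=0.15, seed=0):
    """Build a pseudorandom trial list; avoid two deviants in a row."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == deviant:
            seq.append(standard)          # no immediate deviant repetition
        else:
            seq.append(deviant if rng.random() < p_deviant else standard)
    return seq

# Condition 1: incorrect sentence is the standard, correct sentence the deviant.
cond1 = oddball_sequence("Foxes have wings", "Butterflies have wings")
# Condition 2: roles reversed.
cond2 = oddball_sequence("Butterflies have wings", "Foxes have wings")
print(cond1[:10])
```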
La mort d'une langue: le judeo-espagnol (The Death of a Language: The Spanish Spoken by Jews)
ERIC Educational Resources Information Center
Renard, Raymond
1971-01-01
Describes the Sephardic culture which flourished in the Balkans, Ottoman Empire, and North Africa during the Middle Ages. Suggests the use of "Ladino," the language of medieval Spain spoken by the expelled Jews. (DS)
The role of voice input for human-machine communication.
Cohen, P R; Oviatt, S L
1995-01-01
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803
Tur-Kaspa, Hana; Dromi, Esther
2001-04-01
The present study reports a detailed analysis of written and spoken language samples of Hebrew-speaking children aged 11-13 years who are deaf. It focuses on the description of various grammatical deviations in the two modalities. Participants were 13 students with hearing impairments (HI) attending special classrooms integrated into two elementary schools in Tel Aviv, Israel, and 9 students with normal hearing (NH) in regular classes in these same schools. Spoken and written language samples were collected from all participants using the same five preplanned elicitation probes. Students with HI were found to display significantly more grammatical deviations than their NH peers in both their spoken and written language samples. Most importantly, between-modality differences were noted. The participants with HI exhibited significantly more grammatical deviations in their written language samples than in their spoken samples. However, the distribution of grammatical deviations across categories was similar in the two modalities. The most common grammatical deviations in order of their frequency were failure to supply obligatory morphological markers, failure to mark grammatical agreement, and the omission of a major syntactic constituent in a sentence. Word order violations were rarely recorded in the Hebrew samples. Performance differences in the two modalities encourage clinicians and teachers to facilitate target linguistic forms in diverse communication contexts. Furthermore, the identification of linguistic targets for intervention must be based on the unique grammatical structure of the target language.
Shallow Processing and Attention Capture in Written and Spoken Discourse
ERIC Educational Resources Information Center
Sanford, Alison J. S.; Sanford, Anthony J.; Molle, Jo; Emmott, Catherine
2006-01-01
Processing of discourse seems to be far from uniform with much evidence indicating that it can be quite shallow. The question is then what modulates depth of processing? A range of discourse devices exist that we believe may lead to more detailed processing of language input (Attention Capturers), thus serving as modulators of processing enabling…
Teaching Deaf Children to Talk.
ERIC Educational Resources Information Center
Ewing, Alexander; Ewing, Ethel C.
Designed as a text for audiologists and teachers of hearing impaired children, this book presents basic information about spoken language, hearing, and lipreading. Methods and results of evaluating spoken language of aurally handicapped children without using reading or writing are reported. Various types of individual and group hearing aids are…
Language-driven anticipatory eye movements in virtual reality.
Eichert, Nicole; Peeters, David; Hagoort, Peter
2018-06-01
Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.
ERIC Educational Resources Information Center
Huang, Li-Shih
2010-01-01
This paper reports on a small-scale study that was the first to explore raising second-language (L2) learners' awareness of speaking strategies as mediated by three modalities of task-specific reflection--individual written reflection, individual spoken reflection, and group spoken reflection. Though research in such areas as L2 writing, teacher's…
Syntactic priming in American Sign Language.
Hall, Matthew L; Ferreira, Victor S; Mayberry, Rachel I
2015-01-01
Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.
Changes in N400 Topography Following Intensive Speech Language Therapy for Individuals with Aphasia
ERIC Educational Resources Information Center
Wilson, K. Ryan; O'Rourke, Heather; Wozniak, Linda A.; Kostopoulos, Ellina; Marchand, Yannick; Newman, Aaron J.
2012-01-01
Our goal was to characterize the effects of intensive aphasia therapy on the N400, an electrophysiological index of lexical-semantic processing. Immediately before and after 4 weeks of intensive speech-language therapy, people with aphasia performed a task in which they had to determine whether spoken words were a "match" or a "mismatch" to…
ERIC Educational Resources Information Center
Yoon, Sae Yeol
2012-01-01
The purpose of this study was to explore the development of students' understanding through writing while immersed in an environment where there was a strong emphasis on a language-based argument inquiry approach. Additionally, this study explored students' spoken discourse to gain a better understanding of what role(s) talking plays in…
Effects of early auditory experience on the spoken language of deaf children at 3 years of age.
Nicholas, Johanna Grant; Geers, Ann E
2006-06-01
By age 3, typically developing children have achieved extensive vocabulary and syntax skills that facilitate both cognitive and social development. Substantial delays in spoken language acquisition have been documented for children with severe to profound deafness, even those with auditory oral training and early hearing aid use. This study documents the spoken language skills achieved by orally educated 3-yr-olds whose profound hearing loss was identified and hearing aids fitted between 1 and 30 mo of age and who received a cochlear implant between 12 and 38 mo of age. The purpose of the analysis was to examine the effects of age, duration, and type of early auditory experience on spoken language competence at age 3.5 yr. The spoken language skills of 76 children who had used a cochlear implant for at least 7 mo were evaluated via standardized 30-minute language sample analysis, a parent-completed vocabulary checklist, and a teacher language-rating scale. The children were recruited from and enrolled in oral education programs or therapy practices across the United States. Inclusion criteria included presumed deaf since birth, English the primary language of the home, no other known conditions that interfere with speech/language development, enrolled in programs using oral education methods, and no known problems with the cochlear implant lasting more than 30 days. Strong correlations were obtained among all language measures. Therefore, principal components analysis was used to derive a single Language Factor score for each child. A number of possible predictors of language outcome were examined, including age at identification and intervention with a hearing aid, duration of use of a hearing aid, pre-implant pure-tone average (PTA) threshold with a hearing aid, PTA threshold with a cochlear implant, and duration of use of a cochlear implant/age at implantation (the last two variables were practically identical because all children were tested between 40 and 44 mo of age). Examination of the independent influence of these predictors through multiple regression analysis revealed that pre-implant-aided PTA threshold and duration of cochlear implant use (i.e., age at implant) accounted for 58% of the variance in Language Factor scores. A significant negative coefficient associated with pre-implant-aided threshold indicated that children with poorer hearing before implantation exhibited poorer language skills at age 3.5 yr. Likewise, a strong positive coefficient associated with duration of implant use indicated that children who had used their implant for a longer period of time (i.e., who were implanted at an earlier age) exhibited better language at age 3.5 yr. Age at identification and amplification was unrelated to language outcome, as was aided threshold with the cochlear implant. A significant quadratic trend in the relation between duration of implant use and language score revealed a steady increase in language skill (at age 3.5 yr) for each additional month of use of a cochlear implant after the first 12 mo of implant use. The advantage to language of longer implant use became more pronounced over time. Longer use of a cochlear implant in infancy and very early childhood dramatically affects the amount of spoken language exhibited by 3-yr-old, profoundly deaf children. In this sample, the amount of pre-implant intervention with a hearing aid was not related to language outcome at 3.5 yr of age. 
Rather, it was cochlear implantation at a younger age that served to promote spoken language competence. The previously identified language-facilitating factors of early identification of hearing impairment and early educational intervention may not be sufficient for optimizing spoken language of profoundly deaf children unless it leads to early cochlear implantation.
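The analysis pipeline described above, deriving a single Language Factor from strongly correlated measures via principal components analysis and then regressing it on pre-implant aided threshold and duration of implant use, can be sketched as follows on simulated data; all names, distributions, and values are illustrative assumptions.

```python
# Hedged sketch: collapse correlated language measures into one Language Factor
# (first principal component), then regress it on two predictors. Simulated data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 76
# Three correlated language measures per child (sample, checklist, rating scale).
base = rng.normal(size=n)
measures = np.column_stack([base + 0.3 * rng.normal(size=n) for _ in range(3)])

# First principal component of the standardized measures = Language Factor.
language_factor = PCA(n_components=1).fit_transform(
    StandardScaler().fit_transform(measures)).ravel()

# Predictors: pre-implant aided threshold (dB HL) and months of implant use.
aided_pta = rng.normal(95, 10, size=n)
months_ci_use = rng.uniform(7, 30, size=n)
X = sm.add_constant(np.column_stack([aided_pta, months_ci_use]))
print(sm.OLS(language_factor, X).fit().summary())
```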
Auditory Frequency Discrimination in Children with Dyslexia
ERIC Educational Resources Information Center
Halliday, Lorna F.; Bishop, Dorothy V. M.
2006-01-01
A popular hypothesis holds that developmental dyslexia is caused by phonological processing problems and is therefore linked to difficulties in the analysis of spoken as well as written language. It has been suggested that these phonological deficits might be attributable to low-level problems in processing the temporal fine structure of auditory…
Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
2011-01-01
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.
ERIC Educational Resources Information Center
Werfel, Krystal L.
2017-01-01
Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…
Strandroos, Lisa; Antelius, Eleonor
2017-09-01
Previous research concerning bilingual people with a dementia disease has mainly focused on the importance of sharing a spoken language with caregivers. While acknowledging this, this article addresses the multidimensional character of communication and interaction. Because the dementia disease makes the use of spoken language difficult, this multidimensionality becomes particularly important. The article is based on a qualitative analysis of ethnographic fieldwork at a dementia care facility. It presents ethnographic examples of different communicative forms, with particular focus on bilingual interactions. Interaction is understood as a collective and collaborative activity. The text finds that a shared spoken language is advantageous, but is not the only source of, nor a guarantee for, creating common ground and understanding. Communicative resources other than spoken language are, for example, body language, embodiment, artefacts and time. Furthermore, forms of communication are not static but develop, change and are created over time. The ability to communicate is thus not something that one has or has not, but is situationally and collaboratively created. To facilitate this, time and familiarity are central resources, and the results indicate the importance of continuity in interpersonal relations.
Marchman, Virginia A; Loi, Elizabeth C; Adams, Katherine A; Ashland, Melanie; Fernald, Anne; Feldman, Heidi M
2018-04-01
Identifying which preterm (PT) children are at increased risk of language and learning differences increases opportunities for participation in interventions that improve outcomes. Speed in spoken language comprehension at early stages of language development requires information processing skills that may form the foundation for later language and school-relevant skills. In children born full-term, speed of comprehending words in an eye-tracking task at 2 years old predicted language and nonverbal cognition at 8 years old. Here, we explore the extent to which speed of language comprehension at 1.5 years old predicts both verbal and nonverbal outcomes at 4.5 years old in children born PT. Participants were children born PT (n = 47; ≤32 weeks gestation). Children were tested in the "looking-while-listening" task at 18 months old, adjusted for prematurity, to generate a measure of speed of language comprehension. Parent report and direct assessments of language were also administered. Children were later retested on a test battery of school-relevant skills at 4.5 years old. Speed of language comprehension at 18 months old predicted significant unique variance (12%-31%) in receptive vocabulary, global language abilities, and nonverbal intelligence quotient (IQ) at 4.5 years, controlling for socioeconomic status, gestational age, and medical complications of PT birth. Speed of language comprehension remained uniquely predictive (5%-12%) when also controlling for children's language skills at 18 months old. Individual differences in speed of spoken language comprehension may serve as a marker for neuropsychological processes that are critical for the development of school-relevant linguistic skills and nonverbal IQ in children born PT.
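One plausible way a speed-of-comprehension score is computed in looking-while-listening tasks is as the mean latency of the first gaze shift from the distractor to the target picture after noun onset, over distractor-initial trials. The sketch below illustrates that idea with an assumed trial format; it is not the authors' scoring procedure.

```python
# Illustrative sketch of a gaze-shift latency score from a looking-while-
# listening task. Trial structure and the RT window are assumptions.
import numpy as np

def rt_score(trials):
    """trials: list of dicts with 'gaze' = [(time_ms, 'target'|'distractor'), ...]."""
    latencies = []
    for t in trials:
        samples = t["gaze"]
        if samples[0][1] != "distractor":
            continue                      # use distractor-initial trials only
        shift = next((ms for ms, loc in samples if loc == "target"), None)
        if shift is not None and 300 <= shift <= 1800:   # plausible RT window
            latencies.append(shift)
    return float(np.mean(latencies)) if latencies else float("nan")

example = [{"gaze": [(0, "distractor"), (400, "distractor"), (650, "target")]},
           {"gaze": [(0, "target")]}]
print(rt_score(example))  # -> 650.0
```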
Preference for language in early infancy: the human language bias is not speech specific.
Krentz, Ursula C; Corina, David P
2008-01-01
Fundamental to infants' acquisition of their native language is an inherent interest in the language spoken around them over non-linguistic environmental sounds. The following studies explored whether the bias for linguistic signals in hearing infants is specific to speech, or reflects a general bias for all human language, spoken and signed. Results indicate that 6-month-old infants prefer an unfamiliar, visual-gestural language (American Sign Language) over non-linguistic pantomime, but 10-month-olds do not. These data provide evidence against a speech-specific bias in early infancy and provide insights into those properties of human languages that may underlie this language-general attentional bias.
Language and reading development in the brain today: neuromarkers and the case for prediction.
Buchweitz, Augusto
2016-01-01
The goal of this article is to provide an account of language development in the brain using new information about brain function gleaned from cognitive neuroscience. This account goes beyond describing the association between language and specific brain areas to advocate the possibility of predicting language outcomes using brain-imaging data. Recent studies are discussed in light of the evidence generated for predicting language outcomes and of new methods for analyzing brain data. The account addresses: (1) the development of a hardwired brain circuit for spoken language; (2) the neural adaptation that follows reading instruction and fosters the "grafting" of visual processing areas of the brain onto the hardwired circuit of spoken language; and (3) the prediction of language development and the possibility of translational neuroscience. Brain imaging has allowed for the identification of neural indices (neuromarkers) that reflect typical and atypical language development; the possibility of predicting risk for language disorders has emerged. A mandate to develop a bridge between neuroscience and health- and cognition-related outcomes may pave the way for translational neuroscience. Copyright © 2016 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
Social scale and structural complexity in human languages.
Nettle, Daniel
2012-07-05
The complexity of different components of the grammars of human languages can be quantified. For example, languages vary greatly in the size of their phonological inventories, and in the degree to which they make use of inflectional morphology. Recent studies have shown that there are relationships between these types of grammatical complexity and the number of speakers a language has. Languages spoken by large populations have been found to have larger phonological inventories, but simpler morphology, than languages spoken by small populations. The results require further investigation, and, most importantly, the mechanism whereby the social context of learning and use affects the grammatical evolution of a language needs elucidation.
E-cigarette use and disparities by race, citizenship status and language among adolescents.
Alcalá, Héctor E; Albert, Stephanie L; Ortega, Alexander N
2016-06-01
E-cigarette use among adolescents is on the rise in the U.S. However, limited attention has been given to examining the role of race, citizenship status and language spoken at home in shaping e-cigarette use behavior. Data are from the 2014 Adolescent California Health Interview Survey, which interviewed 1052 adolescents ages 12-17. Lifetime e-cigarette use was examined by sociodemographic characteristics. Separate logistic regression models predicted odds of ever using e-cigarettes from race, citizenship status and language spoken at home. Sociodemographic characteristics were then added to these models as control variables, and a model with all three predictors and controls was run. Similar models were run with conventional smoking as an outcome. Overall, 10.3% of adolescents had ever used e-cigarettes. E-cigarette use was higher among ever-smokers of conventional cigarettes, individuals above 200% of the Federal Poverty Level, US citizens and those who spoke English-only at home. Multivariate analyses demonstrated that citizenship status and language spoken at home were associated with lifetime e-cigarette use, after accounting for control variables. Only citizenship status remained associated with e-cigarette use when race, language spoken at home, and the control variables were all included in the same model. Ever use of e-cigarettes in this study was higher than previously reported national estimates. Action is needed to curb the use of e-cigarettes among adolescents. Differences in lifetime e-cigarette use by citizenship status and language spoken at home suggest that less acculturated individuals use e-cigarettes at lower rates. Copyright © 2016 Elsevier Ltd. All rights reserved.
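The modeling strategy described above is ordinary logistic regression, with odds ratios obtained by exponentiating the fitted coefficients. A hedged sketch, assuming a flat survey extract; the variable names are hypothetical, and the survey weights a real CHIS analysis would require are omitted for brevity:

```python
# Hedged sketch: logistic regression predicting lifetime e-cigarette use from
# citizenship status and home language, with sociodemographic controls.
# Variable names (ever_ecig, us_citizen, english_only, age, sex, poverty) are
# hypothetical; survey weights and design effects are not modeled here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("chis_teen_2014.csv")  # hypothetical extract

model = smf.logit(
    "ever_ecig ~ us_citizen + english_only + age + C(sex) + poverty",
    data=df,
).fit()

odds_ratios = np.exp(model.params)  # exponentiate coefficients to get ORs
print(pd.concat([odds_ratios, np.exp(model.conf_int())], axis=1))
```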
Spoken Word Recognition in Toddlers Who Use Cochlear Implants
Grieco-Calub, Tina M.; Saffran, Jenny R.; Litovsky, Ruth Y.
2010-01-01
Purpose The purpose of this study was to assess the time course of spoken word recognition in 2-year-old children who use cochlear implants (CIs) in quiet and in the presence of speech competitors. Method Children who use CIs and age-matched peers with normal acoustic hearing listened to familiar auditory labels, in quiet or in the presence of speech competitors, while their eye movements to target objects were digitally recorded. Word recognition performance was quantified by measuring each child’s reaction time (i.e., the latency between the spoken auditory label and the first look at the target object) and accuracy (i.e., the amount of time that children looked at target objects within 367 ms to 2,000 ms after the label onset). Results Children with CIs were less accurate and took longer to fixate target objects than did age-matched children without hearing loss. Both groups of children showed reduced performance in the presence of the speech competitors, although many children continued to recognize labels at above-chance levels. Conclusion The results suggest that the unique auditory experience of young CI users slows the time course of spoken word recognition abilities. In addition, real-world listening environments may slow language processing in young language learners, regardless of their hearing status. PMID:19951921
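The two dependent measures in the study above (reaction time to the first target look, and accuracy as the proportion of looking within the 367-2,000 ms window) can be derived from a per-sample gaze record. A minimal sketch under that assumed data layout; the column names and input file are hypothetical simplifications of real eye-tracker output:

```python
# Hedged sketch: deriving word-recognition reaction time and accuracy from a
# per-sample gaze record. Assumed layout: one row per sample, with time_ms
# relative to label onset and a boolean on_target flag (hypothetical).
import pandas as pd

def trial_measures(trial: pd.DataFrame) -> pd.Series:
    post = trial[trial["time_ms"] >= 0]
    target_looks = post[post["on_target"]]
    # Reaction time: latency of the first look to the target after label onset.
    rt = target_looks["time_ms"].min() if not target_looks.empty else float("nan")
    # Accuracy: proportion of samples on target within the 367-2000 ms window.
    window = post[(post["time_ms"] >= 367) & (post["time_ms"] <= 2000)]
    acc = window["on_target"].mean() if not window.empty else float("nan")
    return pd.Series({"rt_ms": rt, "accuracy": acc})

gaze = pd.read_csv("gaze_samples.csv")  # hypothetical input
results = gaze.groupby(["subject", "trial"]).apply(trial_measures)
print(results.groupby("subject").mean())  # per-child summary
```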
Variation in Discourse Strategies in a Multilingual Context
ERIC Educational Resources Information Center
Bai, B. Lakshmi
2010-01-01
This paper is an attempt to study empirically a sample of spoken narratives of Hindi, Telugu and Dakkhini speakers in the multilingual setting of Hyderabad. After a brief account of multilingualism and variation within a language as commonly occurring phenomena, the paper examines the spoken narratives of the three languages mentioned above with a…
Spoken Grammar Practice and Feedback in an ASR-Based CALL System
ERIC Educational Resources Information Center
de Vries, Bart Penning; Cucchiarini, Catia; Bodnar, Stephen; Strik, Helmer; van Hout, Roeland
2015-01-01
Speaking practice is important for learners of a second language. Computer assisted language learning (CALL) systems can provide attractive opportunities for speaking practice when combined with automatic speech recognition (ASR) technology. In this paper, we present a CALL system that offers spoken practice of word order, an important aspect of…
ERIC Educational Resources Information Center
D'Mello, Sidney K.; Dowell, Nia; Graesser, Arthur
2011-01-01
An open question is whether learning differs when students speak versus type their responses while interacting with intelligent tutoring systems that hold natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The "speech facilitation" hypothesis predicts that spoken input will "increase" learning,…
A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome
ERIC Educational Resources Information Center
McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard
2016-01-01
Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…
ERIC Educational Resources Information Center
Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna
2013-01-01
Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in…
Influences of Indigenous Language on Spatial Frames of Reference in Aboriginal English
ERIC Educational Resources Information Center
Edmonds-Wathen, Cris
2014-01-01
The Aboriginal English spoken by Indigenous children in remote communities in the Northern Territory of Australia is influenced by the home languages spoken by themselves and their families. This affects uses of spatial terms used in mathematics such as "in front" and "behind." Speakers of the endangered Indigenous Australian…
ERIC Educational Resources Information Center
Zimmer, Patricia Moore
2001-01-01
Describes the author's experiences directing a play translated and acted in Korean. Notes that she had to get familiar with the sound of the language spoken fluently, to see how an actor's thought is discerned when the verbal language is not understood. Concludes that so much of understanding and communication unfolds in ways other than with…
ERIC Educational Resources Information Center
Konnerth, Linda Anna
2014-01-01
Karbi is a Tibeto-Burman (TB) language spoken by half a million people in the Karbi Anglong district in Assam, Northeast India, and surrounding areas in the extended Brahmaputra Valley area. It is an agglutinating, verb-final language. This dissertation offers a description of the dialect spoken in the hills of the Karbi Anglong district. It is…
L2 Gender Facilitation and Inhibition in Spoken Word Recognition
ERIC Educational Resources Information Center
Behney, Jennifer N.
2011-01-01
This dissertation investigates the role of grammatical gender facilitation and inhibition in second language (L2) learners' spoken word recognition. Native speakers of languages that have grammatical gender are sensitive to gender marking when hearing and recognizing a word. Gender facilitation refers to when a given noun that is preceded by an…
Kanazawa, Yuji; Nakamura, Kimihiro; Ishii, Toru; Aso, Toshihiko; Yamazaki, Hiroshi; Omori, Koichi
2017-01-01
Sign language is an essential medium for everyday social interaction for deaf people and plays a critical role in verbal learning. In particular, language development in these individuals should rely heavily on verbal short-term memory (STM) via sign language. Most previous studies compared neural activations during signed language processing in deaf signers and those during spoken language processing in hearing speakers. For sign language users, it thus remains unclear how visuospatial inputs are converted into the verbal STM operating in the left-hemisphere language network. Using functional magnetic resonance imaging, the present study investigated neural activation while bilinguals of spoken and signed language were engaged in a sequence memory span task. On each trial, participants viewed a nonsense syllable sequence presented either as written letters or as fingerspelling (4-7 syllables in length) and then held the syllable sequence for 12 s. Behavioral analysis revealed that participants relied on phonological memory while holding verbal information regardless of the type of input modality. At the neural level, this maintenance stage broadly activated the left-hemisphere language network, including the inferior frontal gyrus, supplementary motor area, superior temporal gyrus and inferior parietal lobule, for both letter and fingerspelling conditions. Interestingly, while most participants reported that they relied on phonological memory during maintenance, direct comparisons between letter and fingerspelling inputs revealed strikingly different patterns of neural activation during the same period. Namely, the effortful maintenance of fingerspelling inputs relative to letter inputs activated the left superior parietal lobule and dorsal premotor area, i.e., brain regions known to play a role in visuomotor analysis of hand/arm movements. These findings suggest that the dorsal visuomotor neural system subserves verbal learning via sign language by relaying gestural inputs to the classical left-hemisphere language network.
Mastrantuono, Eliana; Saldaña, David; Rodríguez-Ortiz, Isabel R.
2017-01-01
An eye tracking experiment explored the gaze behavior of deaf individuals when perceiving language in spoken and sign language only, and in sign-supported speech (SSS). Participants were deaf (n = 25) and hearing (n = 25) Spanish adolescents. Deaf students were prelingually profoundly deaf individuals with cochlear implants (CIs) used by age 5 or earlier, or prelingually profoundly deaf native signers with deaf parents. The effectiveness of SSS has rarely been tested within the same group of children for discourse-level comprehension. Here, video-recorded texts, including spatial descriptions, were alternately transmitted in spoken language, sign language and SSS. The capacity of these communicative systems to equalize comprehension in deaf participants with that of spoken language in hearing participants was tested. Within-group analyses of deaf participants tested if the bimodal linguistic input of SSS favored discourse comprehension compared to unimodal languages. Deaf participants with CIs achieved equal comprehension to hearing controls in all communicative systems while deaf native signers with no CIs achieved equal comprehension to hearing participants if tested in their native sign language. Comprehension of SSS was not increased compared to spoken language, even when spatial information was communicated. Eye movements of deaf and hearing participants were tracked and data of dwell times spent looking at the face or body area of the sign model were analyzed. Within-group analyses focused on differences between native and non-native signers. Dwell times of hearing participants were equally distributed across upper and lower areas of the face while deaf participants mainly looked at the mouth area; this could enable information to be obtained from mouthings in sign language and from lip-reading in SSS and spoken language. Few fixations were directed toward the signs, although these were more frequent when spatial language was transmitted. Both native and non-native signers looked mainly at the face when perceiving sign language, although non-native signers looked significantly more at the body than native signers. This distribution of gaze fixations suggested that deaf individuals – particularly native signers – mainly perceived signs through peripheral vision. PMID:28680416
Eye movements during spoken word recognition in Russian children.
Sekerina, Irina A; Brooks, Patricia J
2007-09-01
This study explores incremental processing in spoken word recognition in Russian 5- and 6-year-olds and adults using free-viewing eye-tracking. Participants viewed scenes containing pictures of four familiar objects and clicked on a target embedded in a spoken instruction. In the cohort condition, two object names shared identical three-phoneme onsets. In the noncohort condition, all object names had unique onsets. Coarse-grain analyses of eye movements indicated that adults produced looks to the competitor on significantly more cohort trials than on noncohort trials, whereas children surprisingly failed to demonstrate cohort competition due to widespread exploratory eye movements across conditions. Fine-grain analyses, in contrast, showed a similar time course of eye movements across children and adults, but with cohort competition lingering more than 1 s longer in children. The dissociation between coarse-grain and fine-grain eye movements indicates a need to consider multiple behavioral measures in making developmental comparisons in language processing.
Schiff, Rachel; Saiegh-Haddad, Elinor
2018-01-01
This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children's phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children's early morphological awareness in SpA explained variance in children's gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.
Incremental Parsing with Reference Interaction
2004-07-01
Department of Computer Science, University of Rochester, Rochester, NY 14627. Cited work: evidence from eye movements in spoken language comprehension (conference abstract, Architectures and Mechanisms for Language Processing).
ERIC Educational Resources Information Center
Bisconti, Silvia; Shulkin, Masha; Hu, Xiaosu; Basura, Gregory J.; Kileny, Paul R.; Kovelman, Ioulia
2016-01-01
Purpose: The aim of this study was to examine how the brains of individuals with cochlear implants (CIs) respond to spoken language tasks that underlie successful language acquisition and processing. Method: During functional near-infrared spectroscopy imaging, CI recipients with hearing impairment (n = 10, mean age: 52.7 ± 17.3 years) and…
ERIC Educational Resources Information Center
Forteza Fernandez, Rafael Filiberto; Korneeva, Larisa I.
2017-01-01
Based on Selinker's hypothesis of five psycholinguistic processes shaping interlanguage (1972), the paper focuses attention on the Russian L2-learners' overreliance on the L1 as the main factor hindering their development. The research problem is, therefore, the high incidence of L1 transfer in the spoken and written English language output of…
Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study
ERIC Educational Resources Information Center
Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua
2012-01-01
Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…
Kliewer, C
1995-06-01
Interactive and literacy-based language use of young children within the context of an inclusive preschool classroom was explored. An interpretivist framework and qualitative research methods, including participant observation, were used to examine and analyze language in five preschool classes that were composed of children with and without disabilities. Children's language use included spoken, written, signed, and typed forms. Results showed complex communicative and literacy-related language use by young children that fell outside conventional adult perspectives. Also, children who used expressive methods other than speech were often left out of the contexts where spoken language was richest and most complex.
Sutton, Ann; Trudeau, Natacha; Morford, Jill; Rios, Monica; Poirier, Marie-Andrée
2010-01-01
Children who require augmentative and alternative communication (AAC) systems while they are in the process of acquiring language face unique challenges because they use graphic symbols for communication. In contrast to the situation of typically developing children, they use different modalities for comprehension (auditory) and expression (visual). This study explored the ability of three- and four-year-old children without disabilities to perform tasks involving sequences of graphic symbols. Thirty participants were asked to transpose spoken simple sentences into graphic symbols by selecting individual symbols corresponding to the spoken words, and to interpret graphic symbol utterances by selecting one of four photographs corresponding to a sequence of three graphic symbols. The results showed that these were not simple tasks for the participants, and few of them performed in the expected manner: only one in transposition, and only one third in interpretation. Individual response strategies in some cases led to contrasting response patterns. Children at this age level have not yet developed the skills required to deal with graphic symbols even though they have mastered the corresponding spoken language structures.
Aging and Cortical Mechanisms of Speech Perception in Noise
ERIC Educational Resources Information Center
Wong, Patrick C. M.; Jin, James Xumin; Gunasekera, Geshri M.; Abel, Rebekah; Lee, Edward R.; Dhar, Sumitrajit
2009-01-01
Spoken language processing in noisy environments, a hallmark of the human brain, is subject to age-related decline, even when peripheral hearing might be intact. The present study examines the cortical cerebral hemodynamics (measured by fMRI) associated with such processing in the aging brain. Younger and older subjects identified single words in…
Strategies of Clarification in Judges' Use of Language: From the Written to the Spoken.
ERIC Educational Resources Information Center
Philips, Susan U.
1985-01-01
Reports on a study of judges' strategies in clarifying their verbal explanations of constitutional rights to criminal defendants. Identifies six clarification processes and compares them with other studies of clarification processes and with the properties of simplified registers, particularly speech addressed to first- and second-language…
Using Unscripted Spoken Texts in the Teaching of Second Language Listening
ERIC Educational Resources Information Center
Wagner, Elvis
2014-01-01
Most spoken texts that are used in second language (L2) listening classroom activities are scripted texts, where the text is written, revised, polished, and then read aloud with artificially clear enunciation and slow rate of speech. This article explores the field's overreliance on these scripted texts, at the expense of including unscripted…
Revisiting Debates on Oracy: Classroom Talk--Moving towards a Democratic Pedagogy?
ERIC Educational Resources Information Center
Coultas, Valerie
2015-01-01
This article uses documentary evidence to review debates on spoken language and learning in the UK over recent decades. It argues that two different models of talk have been at stake: one that wishes to "correct" children's spoken language and another that encourages children to use talk to learn and represent their worlds. The article…
The Contribution of the Inferior Parietal Cortex to Spoken Language Production
ERIC Educational Resources Information Center
Geranmayeh, Fatemeh; Brownsett, Sonia L. E.; Leech, Robert; Beckmann, Christian F.; Woodhead, Zoe; Wise, Richard J. S.
2012-01-01
This functional MRI study investigated the involvement of the left inferior parietal cortex (IPC) in spoken language production (Speech). Its role has been apparent in some studies but not others, and is not convincingly supported by clinical studies as they rarely include cases with lesions confined to the parietal lobe. We compared Speech with…
ERIC Educational Resources Information Center
Frimberger, Katja
2016-01-01
This article explores the author's embodied experience of linguistic incompetence in the context of an interview-based, short, promotional film production about people's personal connections to their spoken languages in Glasgow, Scotland/UK. The article highlights that people's right to their spoken languages during film interviews and the…
Professional Training in Listening and Spoken Language--A Canadian Perspective
ERIC Educational Resources Information Center
Fitzpatrick, Elizabeth
2010-01-01
Several factors undoubtedly influenced the development of listening and spoken language options for children with hearing loss in Canada. The concept of providing auditory-based rehabilitation was popularized in Canada in the 1960s through the work of Drs. Daniel Ling and Agnes Ling in Montreal. The Lings founded the McGill University Project for…
Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language
ERIC Educational Resources Information Center
Nicholas, Johanna G.; Geers, Ann E.
2008-01-01
Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…
A Race to Rescue Native Tongues
ERIC Educational Resources Information Center
Ashburn, Elyse
2007-01-01
Of the 300 or so native languages once spoken in North America, only about 150 are still spoken--and the majority of those have just a handful of mostly elderly speakers. For most Native American languages, colleges and universities are their last great hope, if not their final resting place. People at a number of institutions across the country…
Guidelines for Evaluating Auditory-Oral Programs for Children Who Are Hearing Impaired.
ERIC Educational Resources Information Center
Alexander Graham Bell Association for the Deaf, Inc., Washington, DC.
These guidelines are intended to assist parents in evaluating educational programs for children who are hearing impaired, where a program's stated intention is promoting the child's optimal use of spoken language as a mode of everyday communication and learning. The guidelines are applicable to programs where spoken language is the sole mode or…
Beyond Rhyme or Reason: ERPs Reveal Task-Specific Activation of Orthography on Spoken Language
ERIC Educational Resources Information Center
Pattamadilok, Chotiga; Perre, Laetitia; Ziegler, Johannes C.
2011-01-01
Metaphonological tasks, such as rhyme judgment, have been the primary tool for the investigation of the effects of orthographic knowledge on spoken language. However, it has been recently argued that the orthography effect in rhyme judgment does not reflect the automatic activation of orthographic codes but rather stems from sophisticated response…
Effects of Tasks on Spoken Interaction and Motivation in English Language Learners
ERIC Educational Resources Information Center
Carrero Pérez, Nubia Patricia
2016-01-01
Task based learning (TBL) or Task based learning and teaching (TBLT) is a communicative approach widely applied in settings where English has been taught as a foreign language (EFL). It has been documented as greatly useful to improve learners' communication skills. This research intended to find the effect of tasks on students' spoken interaction…
ERIC Educational Resources Information Center
Gollan, Tamar H.; Weissberger, Gali H.; Runnqvist, Elin; Montoya, Rosa I.; Cera, Cynthia M.
2012-01-01
This study investigated correspondence between different measures of bilingual language proficiency contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and…
Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network.
Wolf, Dhana; Rekittke, Linn-Marlen; Mittelberg, Irene; Klasen, Martin; Mathiak, Klaus
2017-01-01
Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language) relying on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area) and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without a model of the stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures and thus we studied ISC changes to task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button press time series and neuronal synchronization in the left IFG over the comparison with other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
ERIC Educational Resources Information Center
Dang, Thi Ngoc Yen; Coxhead, Averil; Webb, Stuart
2017-01-01
The linguistic features of academic spoken English are different from those of academic written English. Therefore, for this study, an Academic Spoken Word List (ASWL) was developed and validated to help second language (L2) learners enhance their comprehension of academic speech in English-medium universities. The ASWL contains 1,741 word…
Geytenbeek, Joke J M; Vermeulen, R Jeroen; Becher, Jules G; Oostrom, Kim J
2015-03-01
To assess spoken language comprehension in non-speaking children with severe cerebral palsy (CP) and to explore possible associations with motor type and disability. Eighty-seven non-speaking children (44 males, 43 females, mean age 6y 8mo, SD 2y 1mo) with spastic (54%) or dyskinetic (46%) CP (Gross Motor Function Classification System [GMFCS] levels IV [39%] and V [61%]) underwent spoken language comprehension assessment with the computer-based instrument for low motor language testing (C-BiLLT), a new and validated diagnostic instrument. A multiple linear regression model was used to investigate which variables explained the variation in C-BiLLT scores. Associations between spoken language comprehension abilities (expressed in z-score or age-equivalent score) and motor type of CP, GMFCS and Manual Ability Classification System (MACS) levels, gestational age, and epilepsy were analysed with Fisher's exact test. A p-value <0.05 was considered statistically significant. Chronological age, motor type, and GMFCS classification explained 33% (R = 0.577, R² = 0.33) of the variance in spoken language comprehension. Of the children aged younger than 6 years 6 months, 52.4% of the children with dyskinetic CP attained comprehension scores within the average range (z-score ≥-1.6) as opposed to none of the children with spastic CP. Of the children aged older than 6 years 6 months, 32% of the children with dyskinetic CP reached the highest achievable age-equivalent score compared to 4% of the children with spastic CP. No significant difference in disability was found between CP-related variables (MACS levels, gestational age, epilepsy), with the exception of GMFCS which showed a significant difference in children aged younger than 6 years 6 months (p=0.043). Despite communication disabilities in children with severe CP, particularly in dyskinetic CP, spoken language comprehension may show no or only moderate delay. These findings emphasize the importance of introducing alternative and/or augmentative communication devices from early childhood. © 2014 Mac Keith Press.
Low self-concept in poor readers: prevalence, heterogeneity, and risk.
McArthur, Genevieve; Castles, Anne; Kohnen, Saskia; Banales, Erin
2016-01-01
There is evidence that poor readers are at increased risk for various types of low self-concept, particularly academic self-concept. However, this evidence ignores the heterogeneous nature of poor readers, and hence the likelihood that not all poor readers have low self-concept. The aim of this study was to better understand which types of poor readers have low self-concept. We tested 77 children with poor reading for their age for four types of self-concept, four types of reading, three types of spoken language, and two types of attention. We found that poor readers with poor attention had low academic self-concept, while poor readers with poor spoken language had low general self-concept in addition to low academic self-concept. In contrast, poor readers with typical spoken language and attention did not have low self-concept of any type. We also discovered that academic self-concept was reliably associated with reading and receptive spoken vocabulary, and that general self-concept was reliably associated with spoken vocabulary. These outcomes suggest that poor readers with multiple impairments in reading, language, and attention are at higher risk for low academic and general self-concept, and hence need to be assessed for self-concept in clinical practice. Our results also highlight the need for further investigation into the heterogeneous nature of self-concept in poor readers.
ERIC Educational Resources Information Center
Nicholas, Johanna Grant; Geers, Ann E.
2007-01-01
Purpose: The authors examined the benefits of younger cochlear implantation, longer cochlear implant use, and greater pre-implant aided hearing to spoken language at 3.5 and 4.5 years of age. Method: Language samples were obtained at ages 3.5 and 4.5 years from 76 children who received an implant by their 3rd birthday. Hierarchical linear modeling…
Selective auditory attention in adults: effects of rhythmic structure of the competing language.
Reel, Leigh Ann; Hicks, Candace Bourland
2012-02-01
The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Reception thresholds for English sentences were measured for 50 monolingual English-speaking adults in conditions with 2-talker (male-female) competing speech spoken in a stress-based (English, German), syllable-based (Spanish, French), or mora-based (Japanese) language. Two different masking signals were created for each language (i.e., 2 different 2-talker pairs). All subjects were tested in 10 competing conditions (2 conditions for each of the 5 languages). A significant difference was noted between the 2 masking signals within each language. Across languages, significantly greater listening difficulty was observed in conditions where competing speech was spoken in English, German, or Japanese, as compared with Spanish or French. Results suggest that (a) for a particular language, masking effectiveness can vary between different male-female 2-talker maskers and (b) for stress-based vs. syllable-based languages, competing speech is more difficult to ignore when spoken in a language from the native rhythmic class as compared with a nonnative rhythmic class, regardless of whether the language is familiar or unfamiliar to the listener.
Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih
2017-07-19
It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.
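ERP effects like the two positivities reported above are typically quantified as mean amplitudes within a priori time windows and compared across conditions. A hedged sketch of that step, assuming subject-averaged ERPs stored as a 4-D array; the file name, sampling rate, and condition indices are all assumptions, not details from the study:

```python
# Hedged sketch: mean ERP amplitude in the 200-400 ms and 400-600 ms windows.
# `erps` is a hypothetical array of subject-averaged ERPs with shape
# (n_subjects, n_conditions, n_channels, n_times), sampled at 500 Hz with
# time zero at target onset (all assumed).
import numpy as np
from scipy import stats

sfreq = 500  # samples per second (assumed)

def window_mean(erps: np.ndarray, tmin_ms: int, tmax_ms: int) -> np.ndarray:
    lo, hi = int(tmin_ms / 1000 * sfreq), int(tmax_ms / 1000 * sfreq)
    # Average over the time window, then over channels.
    return erps[..., lo:hi].mean(axis=(-1, -2))

erps = np.load("subject_erps.npy")  # hypothetical file
SYLLABLE, CONTROL = 0, 1            # hypothetical condition indices

for tmin, tmax in [(200, 400), (400, 600)]:
    amp = window_mean(erps, tmin, tmax)
    t, p = stats.ttest_rel(amp[:, SYLLABLE], amp[:, CONTROL])
    print(f"{tmin}-{tmax} ms: t = {t:.2f}, p = {p:.3f}")
```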
Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D
2016-02-15
The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 American Sign Language (ASL) learners performed this task in the fMRI scanner. Results indicated that the learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Imaging results showed increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.
Selected Topics from LVCSR Research for Asian Languages at Tokyo Tech
NASA Astrophysics Data System (ADS)
Furui, Sadaoki
This paper presents our recent work in regard to building Large Vocabulary Continuous Speech Recognition (LVCSR) systems for the Thai, Indonesian, and Chinese languages. For Thai, since there is no word boundary in the written form, we have proposed a new method for automatically creating word-like units from a text corpus, and applied topic and speaking style adaptation to the language model to recognize spoken-style utterances. For Indonesian, we have applied proper noun-specific adaptation to acoustic modeling, and rule-based English-to-Indonesian phoneme mapping to solve the problem of large variation in proper noun and English word pronunciation in a spoken-query information retrieval system. In spoken Chinese, long organization names are frequently abbreviated, and abbreviated utterances cannot be recognized if the abbreviations are not included in the dictionary. We have proposed a new method for automatically generating Chinese abbreviations, and by expanding the vocabulary using the generated abbreviations, we have significantly improved the performance of spoken query-based search.
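The paper's abbreviation-generation method is not detailed here, so the sketch below uses a deliberately naive heuristic (take the first character of each constituent of a segmented organization name, then keep prefixes) purely to illustrate how generated abbreviations can expand a recognizer's vocabulary; it is not the authors' algorithm:

```python
# Hedged sketch: expanding an LVCSR dictionary with candidate abbreviations of
# long organization names. The first-character-per-constituent heuristic is a
# naive illustration, not the generation method proposed in the paper.
def candidate_abbreviations(constituents: list[str]) -> set[str]:
    """Generate candidate abbreviations from a segmented organization name."""
    first_chars = [w[0] for w in constituents]
    full = "".join(first_chars)
    # Keep the all-initials form plus progressively shorter prefixes.
    return {full[:k] for k in range(2, len(full) + 1)}

# "Peking University" segmented into its two constituents:
print(candidate_abbreviations(["北京", "大学"]))  # {'北大'}

vocabulary = set()  # stand-in for the existing recognizer vocabulary
vocabulary |= candidate_abbreviations(["清华", "大学"])
```

Real systems would filter such candidates against observed usage before adding them, since naive initials can produce abbreviations no speaker uses.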
The cultural and linguistic diversity of 3-year-old children with hearing loss.
Crowe, Kathryn; McLeod, Sharynne; Ching, Teresa Y C
2012-01-01
Understanding the cultural and linguistic diversity of young children with hearing loss informs the provision of assessment, habilitation, and education services to both children and their families. Data describing communication mode, oral language use, and demographic characteristics were collected for 406 children with hearing loss and their caregivers when children were 3 years old. The data were from the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study, a prospective, population-based study of children with hearing loss in Australia. The majority of the 406 children used spoken English at home; however, 28 other languages also were spoken. Compared with their caregivers, the children in this study used fewer spoken languages and had higher rates of oral monolingualism. Few children used a spoken language other than English in their early education environment. One quarter of the children used sign to communicate at home and/or in their early education environment. No associations between caregiver hearing status and children's communication mode were identified. This exploratory investigation of the communication modes and languages used by young children with hearing loss and their caregivers provides an initial examination of the cultural and linguistic diversity and heritage language attrition of this population. The findings of this study have implications for the development of resources and the provision of early education services to the families of children with hearing loss, especially where the caregivers use a language that is not the lingua franca of their country of residence.
Positive Emotional Language in the Final Words Spoken Directly Before Execution
Hirschmüller, Sarah; Egloff, Boris
2016-01-01
How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided evidence that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one’s own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality. PMID:26793135
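The computerized quantitative text analysis described above amounts to counting dictionary hits per total word count. A toy sketch of that procedure; the two mini-lexicons are hypothetical stand-ins for a validated emotion dictionary:

```python
# Hedged sketch: a dictionary-based count of positive vs. negative emotion
# words, LIWC-style. The mini-lexicons below are hypothetical placeholders.
import re

POSITIVE = {"love", "peace", "thank", "hope", "joy", "forgive"}
NEGATIVE = {"hate", "fear", "pain", "sorry", "guilt", "anger"}

def emotion_proportions(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = len(words) or 1  # guard against empty statements
    return {"positive": pos / total, "negative": neg / total}

print(emotion_proportions("I love you all and I hope for peace."))
```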
Is Language a Barrier to the Use of Preventive Services?
Woloshin, Steven; Schwartz, Lisa M; Katz, Steven J; Welch, H Gilbert
1997-01-01
OBJECTIVE To isolate the effect of spoken language from financial barriers to care, we examined the relation of language to use of preventive services in a system with universal access. DESIGN Cross-sectional survey. SETTING Household population of women living in Ontario, Canada, in 1990. PARTICIPANTS Subjects were 22,448 women completing the 1990 Ontario Health Survey, a population-based random sample of households. MEASUREMENTS AND MAIN RESULTS We defined language as the language spoken in the home and assessed self-reported receipt of breast examination, mammogram and Pap testing. We used logistic regression to calculate odds ratios for each service adjusting for potential sources of confounding: socioeconomic characteristics, contact with the health care system, and measures reflecting culture. Ten percent of the women spoke a non-English language at home (4% French, 6% other). After adjustment, compared with English speakers, French-speaking women were significantly less likely to receive breast exams or mammography, and other language speakers were less likely to receive Pap testing. CONCLUSIONS Women whose main spoken language was not English were less likely to receive important preventive services. Improving communication with patients with limited English may enhance participation in screening programs. PMID:9276652
Øhre, Beate; Volden, Maj; Falkum, Erik; von Tetzchner, Stephen
2017-01-01
Deaf and hard of hearing (DHH) individuals who use signed language and those who use spoken language face different challenges and stressors. Accordingly, the profile of their mental problems may also differ. However, studies of mental disorders in this population have seldom differentiated between linguistic groups. Our study compares demographics, mental disorders, and levels of distress and functioning in 40 patients using Norwegian Sign Language (NSL) and 36 patients using spoken language. Assessment instruments were translated into NSL. More signers were deaf than hard of hearing, did not share a common language with their childhood caregivers, and had attended schools for DHH children. More Norwegian-speaking than signing patients reported medical comorbidity, whereas the distribution of mental disorders, symptoms of anxiety and depression, and daily functioning did not differ significantly. Somatic complaints and greater perceived social isolation indicate higher stress levels in DHH patients using spoken language than in those using sign language. Therefore, preventive interventions are necessary, as well as larger epidemiological and clinical studies concerning the mental health of all language groups within the DHH population. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Development of brain networks involved in spoken word processing of Mandarin Chinese.
Cao, Fan; Khalid, Kainat; Lee, Rebecca; Brennan, Christine; Yang, Yanhui; Li, Kuncheng; Bolger, Donald J; Booth, James R
2011-08-01
Developmental differences in phonological and orthographic processing of Chinese spoken words were examined in 9-year-olds, 11-year-olds and adults using functional magnetic resonance imaging (fMRI). Rhyming and spelling judgments were made to two-character words presented sequentially in the auditory modality. Developmental comparisons between adults and both groups of children combined showed that age-related changes in activation in visuo-orthographic regions depended on the task. There were developmental increases in the left inferior temporal gyrus and the right inferior occipital gyrus in the spelling task, suggesting more extensive visuo-orthographic processing in a task that required access to these representations. Conversely, there were developmental decreases in activation in the left fusiform gyrus and left middle occipital gyrus in the rhyming task, suggesting that the development of reading is marked by reduced involvement of orthography in a spoken language task that does not require access to these orthographic representations. Developmental decreases may arise from the existence of extensive homophony (auditory words that have multiple spellings) in Chinese. In addition, we found that 11-year-olds and adults showed similar activation in the left superior temporal gyrus across tasks, with both groups showing greater activation than 9-year-olds. This pattern suggests early development of perceptual representations of phonology. In contrast, 11-year-olds and 9-year-olds showed similar activation in the left inferior frontal gyrus across tasks, with both groups showing weaker activation than adults. This pattern suggests late development of controlled retrieval and selection of lexical representations. Altogether, this study suggests differential effects of character acquisition on development of components of the language network in Chinese as compared to previous reports on alphabetic languages. Published by Elsevier Inc.
ERIC Educational Resources Information Center
Van Lancker Sidtis, Diana
2003-01-01
Although interest in the language sciences was previously focused on newly created sentences, much attention has recently turned to the importance of formulaic expressions in normal and disordered communication. Formulaic language is made up of speech formulas, idioms, expletives, serial and memorized speech, slang,…
Evaluating the spoken English proficiency of graduates of foreign medical schools.
Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E
2001-08-01
The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
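As a rough illustration of the psychometrics described above, the sketch below estimates a generalizability coefficient for ratings averaged over ten encounters and a validity coefficient against an external criterion. It is a minimal sketch, not the ECFMG's actual analysis: the simulated data, the one-way variance decomposition, and the TOEFL-like criterion are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_encounters = 200, 10
true_ability = rng.normal(0, 1, n_candidates)                 # candidate effect
ratings = true_ability[:, None] + rng.normal(0, 0.8, (n_candidates, n_encounters))

# Variance components (one-way approximation): residual variance from
# within-candidate spread, person variance from between-candidate spread.
resid_var = np.mean(np.var(ratings, axis=1, ddof=1))
person_var = np.var(ratings.mean(axis=1), ddof=1) - resid_var / n_encounters

# Generalizability coefficient for a mean rating over 10 encounters.
g_coefficient = person_var / (person_var + resid_var / n_encounters)

# Validity coefficient: correlate summary ratings with an external criterion
# (here a simulated TOEFL-like score tied to the same underlying ability).
toefl = 0.6 * true_ability + rng.normal(0, 0.8, n_candidates)
validity = np.corrcoef(ratings.mean(axis=1), toefl)[0, 1]
print(f"G coefficient = {g_coefficient:.2f}, validity r = {validity:.2f}")
```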
Delay or deficit? Spelling processes in children with specific language impairment.
Larkin, Rebecca F; Williams, Gareth J; Blaggan, Samarita
2013-01-01
Few studies have explored the phonological, morphological and orthographic spelling skills of children with specific language impairment (SLI) simultaneously. Fifteen children with SLI (mean age=113.07 months, SD=8.61) completed language and spelling tasks alongside chronological-age controls and spelling-age controls. While the children with SLI showed a deficit in phonological spelling, they performed comparably to spelling-age controls on morphological spelling skills, and there were no differences between the three groups in producing orthographically legal spellings. The results also highlighted the potential importance of adequate non-word repetition skills in relation to effective spelling skills, and demonstrated that not all children with spoken language impairments show marked spelling difficulties. Findings are discussed in relation to theory, educational assessment and practice. As a result of this activity, readers will describe components of spoken language that predict children's morphological and phonological spelling performance. As a result of this activity, readers will describe how the spelling skills of children with SLI compare to age-matched and spelling age-matched control children. Readers will be able to interpret the variability in spelling performance seen in children with SLI. Copyright © 2013 Elsevier Inc. All rights reserved.
An Analysis of a Language Test for Employment: The Authenticity of the PhonePass Test
ERIC Educational Resources Information Center
Chun, Christian W.
2006-01-01
This article presents an analysis of Ordinate Corporation's PhonePass Spoken English Test-10. The company promotes this product as being a useful assessment tool for screening job candidates' ability in spoken English. In the real-life domain of the work environment, one of the primary target language use tasks involves extended production…
ERIC Educational Resources Information Center
Crowe, Kathryn; McLeod, Sharynne; McKinnon, David H.; Ching, Teresa Y. C.
2014-01-01
Purpose: The authors sought to investigate the influence of a comprehensive range of factors on the decision making of caregivers of children with hearing loss regarding the use of speech, the use of sign, spoken language multilingualism, and spoken language choice. This is a companion article to the qualitative investigation described in Crowe,…
Why Oracy Must Be in the Curriculum (and Group Work in the Classroom)
ERIC Educational Resources Information Center
Mercer, Neil
2015-01-01
In this article it is argued that the development of young people's skills in using spoken language should be given more time and attention in the school curriculum. The author discusses the importance of the effective use of spoken language in educational and work settings, considers what research has told us about the factors that make group…
ERIC Educational Resources Information Center
Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José
2015-01-01
Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…
Accelerating Receptive Language Acquisition in Kindergarten Students: An Action Research Study
ERIC Educational Resources Information Center
Hewitt, Christine L.
2013-01-01
Receptive language skills allow students to understand the meaning of words spoken to them. When students are unable to comprehend the majority of the words that are spoken to them, they do not have the ability to act on those words, follow given directions, build on prior knowledge, or construct adequate meaning. The inability to understand the…
Grammar Is a System That Characterizes Talk in Interaction
Ginzburg, Jonathan; Poesio, Massimo
2016-01-01
Much of contemporary mainstream formal grammar theory is unable to provide analyses for language as it occurs in actual spoken interaction. Its analyses are developed for a cleaned up version of language which omits the disfluencies, non-sentential utterances, gestures, and many other phenomena that are ubiquitous in spoken language. Using evidence from linguistics, conversation analysis, multimodal communication, psychology, language acquisition, and neuroscience, we show these aspects of language use are rule governed in much the same way as phenomena captured by conventional grammars. Furthermore, we argue that over the past few years some of the tools required to provide a precise characterizations of such phenomena have begun to emerge in theoretical and computational linguistics; hence, there is no reason for treating them as “second class citizens” other than pre-theoretical assumptions about what should fall under the purview of grammar. Finally, we suggest that grammar formalisms covering such phenomena would provide a better foundation not just for linguistic analysis of face-to-face interaction, but also for sister disciplines, such as research on spoken dialogue systems and/or psychological work on language acquisition. PMID:28066279
Effects of Real-Time Cochlear Implant Simulation on Speech Perception and Production
ERIC Educational Resources Information Center
Casserly, Elizabeth D.
2013-01-01
Real-time use of spoken language is a fundamentally interactive process involving speech perception, speech production, linguistic competence, motor control, neurocognitive abilities such as working memory, attention, and executive function, environmental noise, conversational context, and--critically--the communicative interaction between…
Novel Spoken Word Learning in Adults with Developmental Dyslexia
ERIC Educational Resources Information Center
Conner, Peggy S.
2013-01-01
A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…
Yoshinaga-Itano, Christine; Wiggin, Mallene
2016-11-01
Hearing is essential for the development of speech, spoken language, and listening skills. Children previously went undiagnosed with hearing loss until they were 2.5 or 3 years of age. The auditory deprivation during this critical period of development significantly impacted long-term listening and spoken language outcomes. Due to the advent of universal newborn hearing screening, the average age of diagnosis has dropped to the first few months of life, which sets the stage for outcomes that include children with speech, spoken language, and auditory skill testing in the normal range. However, our work is not finished. The future holds even greater possibilities for children with hearing loss. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
ERIC Educational Resources Information Center
Montrul, Silvina; Davidson, Justin; De La Fuente, Israel; Foote, Rebecca
2014-01-01
We examined how age of acquisition in Spanish heritage speakers and L2 learners interacts with implicitness vs. explicitness of tasks in gender processing of canonical and non-canonical ending nouns. Twenty-three Spanish native speakers, 29 heritage speakers, and 33 proficiency-matched L2 learners completed three on-line spoken word recognition…
Win-win: advancing written language knowledge and practice through university clinics.
Katz, Lauren A; Fallon, Karen A
2015-02-01
Speech-language pathologists (SLPs) are uniquely suited for assessing and treating individuals with both spoken and written language disorders. Yet as students move from the elementary grades into the middle and high school grades, SLPs tend to provide fewer direct language services to them. Although spoken language disorders become written language disorders, SLPs are not receiving sufficient training in the area of written language, and this is reflected in the extent to which they believe they have the knowledge and skills to provide services to struggling readers and writers on their caseloads. In this article, we discuss these problems and present effective methods for addressing them. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
V2S: Voice to Sign Language Translation System for Malaysian Deaf People
NASA Astrophysics Data System (ADS)
Mean Foong, Oi; Low, Tang Jung; La, Wai Wan
Learning and understanding sign language can be cumbersome for some; this paper therefore proposes a solution to this problem by providing a voice (English language) to sign language translation system using speech and image processing techniques. Speech processing, which includes speech recognition, is the study of recognizing the words being spoken regardless of who the speaker is. This project uses template-based recognition as the main approach, in which the V2S system first needs to be trained with speech patterns based on some generic spectral parameter set. These spectral parameter sets are then stored as templates in a database. The system performs the recognition process by matching the parameter set of the input speech with the stored templates and finally displays the sign language in video format. Empirical results show that the system has an 80.3% recognition rate.
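The template-matching step described above can be illustrated with a classic approach: dynamic time warping (DTW) over sequences of spectral feature vectors. The sketch below is a hypothetical reconstruction of the idea, not the authors' V2S implementation; the lexicon, feature dimensions, and random "spectral" frames are invented.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two (frames x features) spectral sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])     # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],        # insertion
                                 cost[i, j - 1],        # deletion
                                 cost[i - 1, j - 1])    # match
    return cost[n, m]

def recognize(utterance, templates):
    """Return the trained word whose stored template best matches the input."""
    return min(templates, key=lambda w: dtw_distance(utterance, templates[w]))

# Hypothetical 13-dimensional spectral frames for two trained words.
rng = np.random.default_rng(1)
templates = {"hello": rng.normal(size=(40, 13)), "thanks": rng.normal(size=(35, 13))}
spoken = templates["hello"] + rng.normal(scale=0.1, size=(40, 13))  # noisy input
print(recognize(spoken, templates))  # -> "hello": play the matching sign video
```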
Iconicity in English and Spanish and Its Relation to Lexical Category and Age of Acquisition
Lupyan, Gary
2015-01-01
Signed languages exhibit iconicity (resemblance between form and meaning) across their vocabulary, and many non-Indo-European spoken languages feature sizable classes of iconic words known as ideophones. In comparison, Indo-European languages like English and Spanish are believed to be arbitrary outside of a small number of onomatopoeic words. In three experiments with English and two with Spanish, we asked native speakers to rate the iconicity of ~600 words from the English and Spanish MacArthur-Bates Communicative Developmental Inventories. We found that iconicity in the words of both languages varied in a theoretically meaningful way with lexical category. In both languages, adjectives were rated as more iconic than nouns and function words, and corresponding to typological differences between English and Spanish in verb semantics, English verbs were rated as relatively iconic compared to Spanish verbs. We also found that both languages exhibited a negative relationship between iconicity ratings and age of acquisition. Words learned earlier tended to be more iconic, suggesting that iconicity in early vocabulary may aid word learning. Altogether these findings show that iconicity is a graded quality that pervades vocabularies of even the most “arbitrary” spoken languages. The findings provide compelling evidence that iconicity is an important property of all languages, signed and spoken, including Indo-European languages. PMID:26340349
Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen
2016-02-01
Objective: To investigate the impact of a spoken language intervention curriculum aimed at improving the language environments that caregivers of low socioeconomic status (SES) provide for their deaf and hard-of-hearing (D/HH) children with cochlear implants and hearing aids, in order to support the children's spoken language development. Study design: Quasiexperimental. Setting: Tertiary. Patients: Thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies Medicaid or WIC/LINK), with children aged < 4.5 years, a hearing loss of ≥ 30 dB between 500 and 4000 Hz, and at least one amplification device with adequate amplification (hearing aid, cochlear implant, or osseo-integrated device). Intervention: Behavioral; a caregiver-directed educational intervention curriculum designed to improve D/HH children's early language environments. Main outcome measures: Changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], Conversational Turn Count [CTC]). Results: Significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group; no significant changes in LENA outcomes. Conclusions: Results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.
Grammar of Kove: An Austronesian Language of the West New Britain Province, Papua New Guinea
ERIC Educational Resources Information Center
Sato, Hiroko
2013-01-01
This dissertation is a descriptive grammar of Kove, an Austronesian language spoken in the West New Britain Province of Papua New Guinea. Kove is primarily spoken in 18 villages, including some on the small islands north of New Britain. There are about 9,000 people living in the area, but many are not fluent speakers of Kove. The dissertation…
A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition
ERIC Educational Resources Information Center
Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko
2015-01-01
When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…
Language-Mediated Visual Orienting Behavior in Low and High Literates
Huettig, Falk; Singh, Niharika; Mishra, Ramesh Kumar
2011-01-01
The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task that resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., “magar,” crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., “matar,” peas; a semantic competitor, e.g., “kachuwa,” turtle; and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze toward phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates in contrast only used phonological information when semantic matches between spoken word and visual referent were not present (Experiment 2) but in contrast to high literates these phonologically mediated shifts in eye gaze were not closely time-locked to the speech input. These data provide further evidence that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word–object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts. PMID:22059083
Misunderstanding and Repair in Tactile Auslan
ERIC Educational Resources Information Center
Willoughby, Louisa; Manns, Howard; Iwasaki, Shimako; Bartlett, Meredith
2014-01-01
This article discusses ways in which misunderstandings arise in Tactile Australian Sign Language (Tactile Auslan) and how they are resolved. Of particular interest are the similarities to and differences from the same processes in visually signed and spoken conversation. This article draws on detailed conversation analysis (CA) and demonstrates…
Hurtig, Anders; Keus van de Poll, Marijke; Pekkola, Elina P.; Hygge, Staffan; Ljung, Robert; Sörqvist, Patrik
2016-01-01
Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants’ first- (L1) and second-language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 s) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 in comparison with recall of spoken words presented in L1. Words that were presented with a high SNR (+12 dBA) improved recall compared to a low SNR (+3 dBA). Reverberation time interacted with SNR to the effect that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language. PMID:26834665
ERP correlates of motivating voices: quality of motivation and time-course matters.
Zougkou, Konstantina; Weinstein, Netta; Paulmann, Silke
2017-10-01
Here, we conducted the first study to explore how motivations expressed through speech are processed in real-time. Participants listened to sentences spoken in two types of well-studied motivational tones (autonomy-supportive and controlling), or a neutral tone of voice. To examine this, listeners were presented with sentences that either signaled motivations through prosody (tone of voice) and words simultaneously (e.g. 'You absolutely have to do it my way' spoken in a controlling tone of voice), or lacked motivationally biasing words (e.g. 'Why don't we meet again tomorrow' spoken in a motivational tone of voice). Event-related brain potentials (ERPs) in response to motivations conveyed through words and prosody showed that listeners rapidly distinguished between motivations and neutral forms of communication as shown in enhanced P2 amplitudes in response to motivational when compared with neutral speech. This early detection mechanism is argued to help determine the importance of incoming information. Once assessed, motivational language is continuously monitored and thoroughly evaluated. When compared with neutral speech, listening to controlling (but not autonomy-supportive) speech led to enhanced late potential ERP mean amplitudes, suggesting that listeners are particularly attuned to controlling messages. The importance of controlling motivation for listeners is mirrored in effects observed for motivations expressed through prosody only. Here, an early rapid appraisal, as reflected in enhanced P2 amplitudes, is only found for sentences spoken in controlling (but not autonomy-supportive) prosody. Once identified as sounding pressuring, the message seems to be preferentially processed, as shown by enhanced late potential amplitudes in response to controlling prosody. Taken together, results suggest that motivational and neutral language are differentially processed; further, the data suggest that listening to cues signaling pressure and control cannot be ignored and lead to preferential, and more in-depth processing mechanisms. © The Author (2017). Published by Oxford University Press.
Writing Signed Languages: What For? What Form?
Grushkin, Donald A
2017-01-01
Signed languages around the world have tended to maintain an "oral," unwritten status. Despite the advantages of possessing a written form of their language, signed language communities typically resist and reject attempts to create such written forms. The present article addresses many of the arguments against written forms of signed languages, and presents the potential advantages of writing signed languages. Following a history of the development of writing in spoken as well as signed language populations, the effects of orthographic types upon literacy and biliteracy are explored. Attempts at writing signed languages have followed two primary paths: "alphabetic" and "iconographic." It is argued that for greatest congruency and ease in developing biliteracy strategies in societies where an alphabetic script is used for the spoken language, signed language communities within these societies are best served by adoption of an alphabetic script for writing their signed language.
Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora
Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.
2012-01-01
Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when we consider all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980
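The token-based comparison underlying this analysis reduces to comparing each CV combination's observed frequency with the frequency expected from the independent C and V frequencies. The sketch below illustrates that computation on invented toy counts; it is not the authors' code or data, and the place/height classes stand in for actual segment inventories.

```python
from collections import Counter

# Hypothetical CV tokens, coded by consonant place and vowel height/backness.
cv_tokens = [("labial", "central")] * 50 + [("labial", "front")] * 20 + \
            [("coronal", "front")] * 60 + [("coronal", "central")] * 30 + \
            [("velar", "back")] * 25 + [("velar", "front")] * 15

n = len(cv_tokens)
cv_counts = Counter(cv_tokens)
c_counts = Counter(c for c, _ in cv_tokens)
v_counts = Counter(v for _, v in cv_tokens)

for (c, v), obs in sorted(cv_counts.items()):
    expected = c_counts[c] * v_counts[v] / n      # chance co-occurrence count
    ratio = obs / expected                        # >1: over-represented CV
    print(f"{c}+{v}: observed/expected = {ratio:.2f}")
```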
Foreign Languages Sound Fast: Evidence from Implicit Rate Normalization.
Bosker, Hans Rutger; Reinisch, Eva
2017-01-01
Anecdotal evidence suggests that unfamiliar languages sound faster than one's native language. Empirical evidence for this impression has, so far, come from explicit rate judgments. The aim of the present study was to test whether such perceived rate differences between native and foreign languages (FLs) have effects on implicit speech processing. Our measure of implicit rate perception was "normalization for speech rate": an ambiguous vowel between short /a/ and long /a:/ is interpreted as /a:/ following a fast but as /a/ following a slow carrier sentence. That is, listeners did not judge speech rate itself; instead, they categorized ambiguous vowels whose perception was implicitly affected by the rate of the context. We asked whether a bias towards long /a:/ might be observed when the context is not actually faster but simply spoken in a FL. A fully symmetrical experimental design was used: Dutch and German participants listened to rate matched (fast and slow) sentences in both languages spoken by the same bilingual speaker. Sentences were followed by non-words that contained vowels from an /a-a:/ duration continuum. Results from Experiments 1 and 2 showed a consistent effect of rate normalization for both listener groups. Moreover, for German listeners, across the two experiments, foreign sentences triggered more /a:/ responses than (rate matched) native sentences, suggesting that foreign sentences were indeed perceived as faster. Moreover, this FL effect was modulated by participants' ability to understand the FL: those participants that scored higher on a FL translation task showed less of a FL effect. However, opposite effects were found for the Dutch listeners. For them, their native rather than the FL induced more /a:/ responses. Nevertheless, this reversed effect could be reduced when additional spectral properties of the context were controlled for. Experiment 3, using explicit rate judgments, replicated the effect for German but not Dutch listeners. We therefore conclude that the subjective impression that FLs sound fast may have an effect on implicit speech processing, with implications for how language learners perceive spoken segments in a FL.
Verbal redundancy aids memory for filmed entertainment dialogue.
Hinkin, Michael P; Harris, Richard J; Miranda, Andrew T
2014-01-01
Three studies investigated the effects of presentation modality and redundancy of verbal content on recognition memory for entertainment film dialogue. U.S. participants watched two brief movie clips and afterward answered multiple-choice questions about information from the dialogue. Experiment 1 compared recognition memory for spoken dialogue in the native language (English) with subtitles in English, French, or no subtitles. Experiment 2 compared memory for material in English subtitles with spoken dialogue in English, French, or no sound. Experiment 3 examined three control conditions with no spoken or captioned material in the native language. All participants watched the same video clips and answered the same questions. Performance was consistently good whenever English dialogue appeared in either the subtitles or sound, and best of all when it appeared in both, supporting the facilitation of verbal redundancy. Performance was also better when English was only in the subtitles than when it was only spoken. Unexpectedly, sound or subtitles in an unfamiliar language (French) modestly improved performance, as long as there was also a familiar channel. Results extend multimedia research on verbal redundancy for expository material to verbal information in entertainment media.
The interface between spoken and written language: developmental disorders.
Hulme, Charles; Snowling, Margaret J
2014-01-01
We review current knowledge about reading development and the origins of difficulties in learning to read. We distinguish between the processes involved in learning to decode print, and the processes involved in reading for meaning (reading comprehension). At a cognitive level, difficulties in learning to read appear to be predominantly caused by deficits in underlying oral language skills. The development of decoding skills appears to depend critically upon phonological language skills, and variations in phoneme awareness, letter-sound knowledge and rapid automatized naming each appear to be causally related to problems in learning to read. Reading comprehension difficulties in contrast appear to be critically dependent on a range of oral language comprehension skills (including vocabulary knowledge and grammatical, morphological and pragmatic skills).
The Hidden Meaning of Inner Speech.
ERIC Educational Resources Information Center
Pomper, Marlene M.
This paper is concerned with the inner speech process, its relationship to thought and behavior, and its theoretical and educational implications. The paper first defines inner speech as a bridge between thought and written or spoken language and traces its development. Second, it investigates competing theories surrounding the subject with an…
Simple Logic for Big Problems: An Inside Look at Relational Databases.
ERIC Educational Resources Information Center
Seba, Douglas B.; Smith, Pat
1982-01-01
Discusses database design concept termed "normalization" (process replacing associations between data with associations in two-dimensional tabular form) which results in formation of relational databases (they are to computers what dictionaries are to spoken languages). Applications of the database in serials control and complex systems…
Orthography Influences the Perception and Production of Speech
ERIC Educational Resources Information Center
Rastle, Kathleen; McCormick, Samantha F.; Bayliss, Linda; Davis, Colin J.
2011-01-01
One intriguing question in language research concerns the extent to which orthographic information impacts on spoken word processing. Previous research has faced a number of methodological difficulties and has not reached a definitive conclusion. Our research addresses these difficulties by capitalizing on recent developments in the area of word…
ERIC Educational Resources Information Center
Preston, Jonathan L.; Felsenfeld, Susan; Frost, Stephen J.; Mencl, W. Einar; Fulbright, Robert K.; Grigorenko, Elena L.; Landi, Nicole; Seki, Ayumi; Pugh, Kenneth R.
2012-01-01
Purpose: To examine neural response to spoken and printed language in children with speech sound errors (SSE). Method: Functional magnetic resonance imaging was used to compare processing of auditorily and visually presented words and pseudowords in 17 children with SSE, ages 8;6[years;months] through 10;10, with 17 matched controls. Results: When…
Semantic and phonological schema influence spoken word learning and overnight consolidation.
Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H
2018-06-01
We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period spent either asleep overnight or awake during the day. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.
Schreibman, Laura; Stahmer, Aubyn C
2014-05-01
Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT) to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS) on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.
Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E
2012-04-01
Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.
ERIC Educational Resources Information Center
Maldonado Torres, Sonia Enid
2016-01-01
The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…
ERIC Educational Resources Information Center
De Angelis, Gessica
2014-01-01
The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…
Moats, L C
1994-01-01
Reading research supports the necessity for directly teaching concepts about linguistic structure to beginning readers and to students with reading and spelling difficulties. In this study, experienced teachers of reading, language arts, and special education were tested to determine if they have the requisite awareness of language elements (e.g., phonemes, morphemes) and of how these elements are represented in writing (e.g., knowledge of sound-symbol correspondences). The results were surprisingly poor, indicating that even motivated and experienced teachers typically understand too little about spoken and written language structure to be able to provide sufficient instruction in these areas. The utility of language structure knowledge for instructional planning, for assessment of student progress, and for remediation of literacy problems is discussed. The teachers participating in the study subsequently took a course focusing on phonemic awareness training, spoken-written language relationships, and careful analysis of spelling and reading behavior in children. At the end of the course, the teachers judged this information to be essential for teaching and advised that it become a prerequisite for certification. Recommendations for requirements and content of teacher education programs are presented.
Flores, Glenn; Abreu, Milagros; Tomany-Korman, Sandra C
2005-01-01
Approximately 3.5 million U.S. schoolchildren are limited in English proficiency (LEP). Disparities in children's health and health care are associated with both LEP and speaking a language other than English at home, but prior research has not examined which of these two measures of language barriers is most useful in examining health care disparities. Our objectives were to compare primary language spoken at home vs. parental LEP and their associations with health status, access to care, and use of health services in children. We surveyed parents at urban community sites in Boston, asking 74 questions on children's health status, access to health care, and use of health services. Some 98% of the 1,100 participating children and families were of non-white race/ethnicity, 72% of parents were LEP, and 13 different primary languages were spoken at home. "Dose-response" relationships were observed between parental English proficiency and several child and parental sociodemographic features, including children's insurance coverage, parental educational attainment, citizenship and employment, and family income. Similar "dose-response" relationships were noted between the primary language spoken at home and many but not all of the same sociodemographic features. In multivariate analyses, LEP parents were associated with triple the odds of a child having fair/poor health status, double the odds of the child spending at least one day in bed for illness in the past year, and significantly greater odds of children not being brought in for needed medical care for six of nine access barriers to care. None of these findings were observed in analyses of the primary language spoken at home. Individual parental LEP categories were associated with different risks of adverse health status and outcomes. Parental LEP is superior to the primary language spoken at home as a measure of the impact of language barriers on children's health and health care. Individual parental LEP categories are associated with different risks of adverse outcomes in children's health and health care. Consistent data collection on parental English proficiency and referral of LEP parents to English classes by pediatric providers have the potential to contribute toward reduction and elimination of health care disparities for children of LEP parents.
Attentional Capture of Objects Referred to by Spoken Language
ERIC Educational Resources Information Center
Salverda, Anne Pier; Altmann, Gerry T. M.
2011-01-01
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…
ERIC Educational Resources Information Center
Schwarz, Amy Louise; Guajardo, Jennifer; Hart, Rebecca
2017-01-01
Deaf and hard-of-hearing (DHH) literature suggests that there are different read-aloud goals for DHH prereaders based on the spoken and visual communication modes DHH prereaders use, such as: American Sign Language (ASL), simultaneously signed and spoken English (SimCom), and predominately spoken English only. To date, no studies have surveyed…
Neural correlates of pragmatic language comprehension in autism spectrum disorders.
Tesink, C M J Y; Buitelaar, J K; Petersson, K M; van der Gaag, R J; Kan, C C; Tendolkar, I; Hagoort, P
2009-07-01
Difficulties with pragmatic aspects of communication are universal across individuals with autism spectrum disorders (ASDs). Here we focused on an aspect of pragmatic language comprehension that is relevant to social interaction in daily life: the integration of speaker characteristics inferred from the voice with the content of a message. Using functional magnetic resonance imaging (fMRI), we examined the neural correlates of the integration of voice-based inferences about the speaker's age, gender or social background, and sentence content in adults with ASD and matched control participants. Relative to the control group, the ASD group showed increased activation in right inferior frontal gyrus (RIFG; Brodmann area 47) for speaker-incongruent sentences compared to speaker-congruent sentences. Given that both groups performed behaviourally at a similar level on a debriefing interview outside the scanner, the increased activation in RIFG for the ASD group was interpreted as being compensatory in nature. It presumably reflects spill-over processing from the language dominant left hemisphere due to higher task demands faced by the participants with ASD when integrating speaker characteristics and the content of a spoken sentence. Furthermore, only the control group showed decreased activation for speaker-incongruent relative to speaker-congruent sentences in right ventral medial prefrontal cortex (vMPFC; Brodmann area 10), including right anterior cingulate cortex (ACC; Brodmann area 24/32). Since vMPFC is involved in self-referential processing related to judgments and inferences about self and others, the absence of such a modulation in vMPFC activation in the ASD group possibly points to atypical default self-referential mental activity in ASD. Our results show that in ASD compensatory mechanisms are necessary in implicit, low-level inferential processes in spoken language understanding. This indicates that pragmatic language problems in ASD are not restricted to high-level inferential processes, but encompass the most basic aspects of pragmatic language processing.
Huettig, Falk; Altmann, Gerry T M
2005-05-01
When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84-107]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word 'piano' when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c), both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word 'piano' unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.
Campbell, Ruth; MacSweeney, Mairéad; Woll, Bencie
2014-01-01
Cochlear implantation (CI) for profound congenital hearing impairment, while often successful in restoring hearing to the deaf child, does not always result in effective speech processing. Exposure to non-auditory signals during the pre-implantation period is widely held to be responsible for such failures. Here, we question the inference that such exposure irreparably distorts the function of auditory cortex, negatively impacting the efficacy of CI. Animal studies suggest that in congenital early deafness there is a disconnection between (disordered) activation in primary auditory cortex (A1) and activation in secondary auditory cortex (A2). In humans, one factor contributing to this functional decoupling is assumed to be abnormal activation of A1 by visual projections, including exposure to sign language. In this paper we show that this abnormal activation of A1 does not routinely occur, while A2 functions effectively supramodally and multimodally to deliver spoken language irrespective of hearing status. What, then, is responsible for poor outcomes for some individuals with CI and for apparent abnormalities in cortical organization in these people? Since infancy is a critical period for the acquisition of language, deaf children born to hearing parents are at risk of developing inefficient neural structures to support skilled language processing. A sign language, acquired by a deaf child as a first language in a signing environment, is cortically organized like a heard spoken language in terms of specialization of the dominant perisylvian system. However, very few deaf children are exposed to sign language in early infancy. Moreover, no studies to date have examined sign language proficiency in relation to cortical organization in individuals with CI. Given the paucity of such relevant findings, we suggest that the best guarantee of a good language outcome after CI is the establishment of a secure first language pre-implant, however that may be achieved, and whatever the success of auditory restoration. PMID:25368567
Werfel, Krystal L
2017-10-05
The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance was used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. Two interaction effects were present. For phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and rates of change were not sufficient to catch up to their peers over time.
Language and Culture in the Multi-Ethnic Community: Spoken-Language Assessment
ERIC Educational Resources Information Center
Matluck, Joseph H.; Mace-Matluck, Betty J.
1975-01-01
Describes the research approach used to develop the MAT-SEA-CAL Oral Proficiency tests designed by the authors. Language test performance depends on both language proficiency and knowledge of the culture. (TL)
Rämä, Pia; Sirri, Louah; Serres, Josette
2013-04-01
Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that the N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds, the effect was observed, similarly to 24-month-olds, only in those children with higher word production ability. The results suggest that words are categorically organized in the mental lexicon of children at the age of 2 years and even earlier in children with a high vocabulary. Copyright © 2013 Elsevier Inc. All rights reserved.
Experiments on Urdu Text Recognition
NASA Astrophysics Data System (ADS)
Mukhtar, Omar; Setlur, Srirangaraj; Govindaraju, Venu
Urdu is a language spoken in the Indian subcontinent by an estimated 130-270 million speakers. At the spoken level, Urdu and Hindi are considered dialects of a single language because of shared vocabulary and the similarity in grammar. At the written level, however, Urdu is much closer to Arabic because it is written in Nastaliq, the calligraphic style of the Persian-Arabic script. Therefore, a speaker of Hindi can understand spoken Urdu but may not be able to read written Urdu because Hindi is written in Devanagari script, whereas an Arabic writer can read the written words but may not understand the spoken Urdu. In this chapter we present an overview of written Urdu. Prior research in handwritten Urdu OCR is very limited. We present (perhaps) the first system for recognizing handwritten Urdu words. On a data set of about 1300 handwritten words, we achieved an accuracy of 70% for the top choice, and 82% for the top three choices.
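The reported figures correspond to standard top-k word accuracy. Assuming the recognizer returns a ranked candidate list per handwritten word (an assumption; the entry does not describe its output format), a minimal sketch of the metric, with invented Urdu word labels, looks like this:

```python
def top_k_accuracy(predictions, truths, k):
    """predictions: one ranked candidate list per item; truths: correct words."""
    hits = sum(truth in ranked[:k] for ranked, truth in zip(predictions, truths))
    return hits / len(truths)

# Hypothetical ranked outputs for two handwritten word images.
ranked_lists = [["kitab", "kalam"], ["qalam", "kalam", "kitab"]]
truths = ["kitab", "kalam"]
print(top_k_accuracy(ranked_lists, truths, 1))  # top choice
print(top_k_accuracy(ranked_lists, truths, 3))  # top three choices
```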
Shuai, Lan; Malins, Jeffrey G
2017-02-01
Despite being one of the most influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we built on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
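To make the interactive-activation idea concrete, the sketch below implements a drastically simplified TRACE-style update in which word units receive bottom-up excitation from phoneme and tone units and inhibit one another laterally. The toy lexicon, parameter values, and input timing are illustrative assumptions only; this does not reproduce jTRACE or the authors' coding scheme.

```python
import numpy as np

# Toy lexicon: each word is a set of phoneme and tone units (tone as a unit).
lexicon = {"ma1": ("m", "a", "1"), "ma4": ("m", "a", "4"), "pa1": ("p", "a", "1")}
words = list(lexicon)
act = np.zeros(len(words))
EXCITE, INHIBIT, DECAY = 0.10, 0.05, 0.02

def step(input_units: set) -> None:
    """One update cycle: bottom-up excitation, lateral inhibition, decay."""
    global act
    overlap = np.array([len(input_units & set(lexicon[w])) for w in words])
    inhibition = act.sum() - act            # total activation of competitors
    act = np.clip(act + EXCITE * overlap - INHIBIT * inhibition - DECAY * act,
                  0.0, 1.0)

# Vowel and tone information become available at similar times ("ma" + tone 1).
for units in [{"m"}, {"m", "a", "1"}, {"m", "a", "1"}]:
    step(units)
print(dict(zip(words, act.round(3))))  # "ma1" should dominate "ma4" and "pa1"
```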
SyllabO+: A new tool to study sublexical phenomena in spoken Quebec French.
Bédard, Pascale; Audet, Anne-Marie; Drouin, Patrick; Roy, Johanna-Pascale; Rivard, Julie; Tremblay, Pascale
2017-10-01
Sublexical phonotactic regularities in language have a major impact on language development, as well as on speech processing and production throughout the entire lifespan. To understand the impact of phonotactic regularities on speech and language functions at the behavioral and neural levels, it is essential to have access to oral language corpora to study these complex phenomena in different languages. Yet, probably because of their complexity, oral language corpora remain less common than written language corpora. This article presents the first corpus and database of spoken Quebec French syllables and phones: SyllabO+. This corpus contains phonetic transcriptions of over 300,000 syllables (over 690,000 phones) extracted from recordings of 184 healthy adult native Quebec French speakers, ranging in age from 20 to 97 years. To ensure the representativeness of the corpus, these recordings were made in both formal and familiar communication contexts. Phonotactic distributional statistics (e.g., syllable and co-occurrence frequencies, percentages, percentile ranks, transition probabilities, and pointwise mutual information) were computed from the corpus. An open-access online application to search the database was developed, and is available at www.speechneurolab.ca/syllabo. In this article, we present a brief overview of the corpus, as well as the syllable and phone databases, and we discuss their practical applications in various fields of research, including cognitive neuroscience, psycholinguistics, neurolinguistics, experimental psychology, phonetics, and phonology. Nonacademic practical applications are also discussed, including uses in speech-language pathology.
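Two of the statistics listed above, transition probability and pointwise mutual information (PMI), can be computed directly from syllabified tokens. The sketch below uses an invented toy corpus, not SyllabO+ data, and approximates the transition probability with each syllable's overall frequency.

```python
import math
from collections import Counter

# Hypothetical syllabified word tokens.
words = [["bon", "jour"], ["bon", "bon"], ["jour", "nal"], ["bon", "jour"]]
bigrams = [(w[i], w[i + 1]) for w in words for i in range(len(w) - 1)]
syllables = [s for w in words for s in w]

syl_freq = Counter(syllables)
big_freq = Counter(bigrams)
n_syl, n_big = len(syllables), len(bigrams)

for (s1, s2), f in big_freq.items():
    p12 = f / n_big
    p1, p2 = syl_freq[s1] / n_syl, syl_freq[s2] / n_syl
    transition = f / syl_freq[s1]        # rough P(s2 | s1), overall s1 frequency
    pmi = math.log2(p12 / (p1 * p2))     # association beyond chance
    print(f"{s1}->{s2}: P={transition:.2f}, PMI={pmi:.2f}")
```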
Language planning for the 21st century: revisiting bilingual language policy for deaf children.
Knoors, Harry; Marschark, Marc
2012-01-01
For over 25 years in some countries and more recently in others, bilingual education involving sign language and the written/spoken vernacular has been considered an essential educational intervention for deaf children. With the recent growth in universal newborn hearing screening and technological advances such as digital hearing aids and cochlear implants, however, more deaf children than ever before have the potential for acquiring spoken language. As a result, the question arises as to the role of sign language and bilingual education for deaf children, particularly those who are very young. On the basis of recent research and fully recognizing the historical sensitivity of this issue, we suggest that language planning and language policy should be revisited in an effort to ensure that they are appropriate for the increasingly diverse population of deaf children.
Willis, Suzi; Goldbart, Juliet; Stansfield, Jois
2014-07-01
To compare the verbal short-term memory and visual working memory abilities of six children with congenital hearing impairment identified as having significant language-learning difficulties with normative data from typically hearing children, using standardized memory assessments. Six children with hearing loss aged 8-15 years were assessed on measures of verbal short-term memory (non-word and word recall) and visual working memory annually over a two-year period. All children had cognitive abilities within normal limits and used spoken language as their primary mode of communication. The language assessment scores at the beginning of the study revealed that all six participants exhibited delays of two years or more on standardized assessments of receptive and expressive vocabulary and spoken language. The children with hearing impairment scored significantly higher on the non-word recall task than on the "real" word recall task. They also exhibited significantly higher scores on visual working memory than the age-matched sample from the standardized memory assessment. Each of the six participants displayed the same pattern of strengths and weaknesses in verbal short-term memory and visual working memory despite their very different chronological ages. The children's poor ability to recall single-syllable words relative to non-words is a clinical indicator of their difficulties in verbal short-term memory. However, the children with hearing impairment did not display generalized processing difficulties and indeed demonstrated strengths in visual working memory. Poor ability to recall words, in combination with difficulties in early word learning, may be an indicator of children with hearing impairment who will struggle to develop spoken language equal to that of their normally hearing peers. This early identification has the potential to allow for target-specific intervention that may remediate their difficulties. Copyright © 2014. Published by Elsevier Ireland Ltd.
Li, Yu; Zhang, Linjun; Xia, Zhichao; Yang, Jie; Shu, Hua; Li, Ping
2017-01-01
Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability only in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading. PMID:28690507
Almeida, Diogo; Poeppel, David; Corina, David
The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.
Online Lexical Competition during Spoken Word Recognition and Word Learning in Children and Adults
ERIC Educational Resources Information Center
Henderson, Lisa; Weighall, Anna; Brown, Helen; Gaskell, Gareth
2013-01-01
Lexical competition that occurs as speech unfolds is a hallmark of adult oral language comprehension crucial to rapid incremental speech processing. This study used pause detection to examine whether lexical competition operates similarly at 7-8 years and tested variables that influence "online" lexical activity in adults. Children…
ERIC Educational Resources Information Center
Nihei, Koichi
This paper discusses how to teach listening so that English-as-a-Second-Language students can develop a level of listening ability that is useful in the real world, not just in the classroom. It asserts that if teachers know the processes involved in listening comprehension and some features of spoken English, they can provide students with…
Individual Differences in Premotor and Motor Recruitment during Speech Perception
ERIC Educational Resources Information Center
Szenkovits, Gayaneh; Peelle, Jonathan E.; Norris, Dennis; Davis, Matthew H.
2012-01-01
Although activity in premotor and motor cortices is commonly observed in neuroimaging studies of spoken language processing, the degree to which this activity is an obligatory part of everyday speech comprehension remains unclear. We hypothesised that rather than being a unitary phenomenon, the neural response to speech perception in motor regions…
Orthography and Modality Influence Speech Production in Adults and Children
ERIC Educational Resources Information Center
Saletta, Meredith; Goffman, Lisa; Hogan, Tiffany P.
2016-01-01
Purpose: The acquisition of literacy skills influences the perception and production of spoken language. We examined if orthography influences implicit processing in speech production in child readers and in adult readers with low and high reading proficiency. Method: Children (n = 17), adults with typical reading skills (n = 17), and adults…
AT Advocates Tackle Attitudes & Education towards Learning Disabilities
ERIC Educational Resources Information Center
Williams, John M.
2006-01-01
Learning disabilities are present in 10 percent of the population, and the condition is defined as, "A disorder in basic psychological processes involved in understanding or using language, spoken or written, that may manifest itself in an imperfect ability to listen, think, speak, read, write, spell or use mathematical calculations". In this…
ERIC Educational Resources Information Center
Witt, Autumn Song
2010-01-01
This dissertation follows an oral language assessment tool from initial design and implementation to validity analysis. The specialized variables of this study are the population: international teaching assistants and the purpose: spoken assessment as a hiring prerequisite. However, the process can easily be applied to other populations and…
Rhythm Perception and Its Role in Perception and Learning of Dysrhythmic Speech
ERIC Educational Resources Information Center
Borrie, Stephanie A.; Lansford, Kaitlin L.; Barrett, Tyson S.
2017-01-01
Purpose: The perception of rhythm cues plays an important role in recognizing spoken language, especially in adverse listening conditions. Indeed, this has been shown to hold true even when the rhythm cues themselves are dysrhythmic. This study investigates whether expertise in rhythm perception provides a processing advantage for perception…
Research and Policy Considerations for English Learner Equity
ERIC Educational Resources Information Center
Robinson-Cimpian, Joseph P.; Thompson, Karen D.; Umansky, Ilana M.
2016-01-01
English learners (ELs), students from a home where a language other than English is spoken and who are in the process of developing English proficiency themselves, represent over 10% of the US student population. Oftentimes education policies and practices create barriers for ELs to achieve access and outcomes that are equitable to those of their…
Modeling the Control of Phonological Encoding in Bilingual Speakers
ERIC Educational Resources Information Center
Roelofs, Ardi; Verhoef, Kim
2006-01-01
Phonological encoding is the process by which speakers retrieve phonemic segments for morphemes from memory and use the segments to assemble phonological representations of words to be spoken. When conversing in one language, bilingual speakers have to resist the temptation of encoding word forms using the phonological rules and representations of…
Brain Bases of Morphological Processing in Chinese-English Bilingual Children
ERIC Educational Resources Information Center
Ip, Ka I; Hsu, Lucy Shih-Ju; Arredondo, Maria M.; Tardif, Twila; Kovelman, Ioulia
2017-01-01
Can bilingual exposure impact children's neural circuitry for learning to read? To answer this question, we investigated the brain bases of morphological awareness, one of the key spoken language abilities for learning to read in English and Chinese. Bilingual Chinese-English and monolingual English children (N = 22, ages 7-12) completed…
Research on oral test modeling based on multi-feature fusion
NASA Astrophysics Data System (ADS)
Shi, Yuliang; Tao, Yiyue; Lei, Jun
2018-04-01
In this paper, the spectrogram of the speech signal is taken as the input for feature extraction. Exploiting the strengths of the pulse-coupled neural network (PCNN) in image segmentation and related processing, we treat the spectrogram as an image and extract features from it, exploring a new method that combines speech signal processing with image processing. In addition to the spectrogram features, MFCC-based spectral features are computed and fused with them to further improve the accuracy of spoken-language assessment. Because the fused features are high-dimensional and discriminative, we use a Support Vector Machine (SVM) as the classifier and compare extracted test-speech features against standard-speech features to detect standard pronunciation. Experiments show that extracting features from spectrograms with a PCNN is feasible, and that fusing image features with spectral features improves detection accuracy.
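The pipeline described above can be sketched as follows. This is a minimal illustration of the fusion idea only: the paper's PCNN stage has no standard-library implementation, so it is replaced here by a crude stand-in (per-band spectrogram statistics), and the signals and labels are synthetic:

    import numpy as np
    from scipy.signal import spectrogram
    import librosa
    from sklearn.svm import SVC

    def image_like_features(signal, sr):
        # crude stand-in for the PCNN stage: per-band spectrogram statistics
        _, _, S = spectrogram(signal, fs=sr, nperseg=256)
        return np.concatenate([S.mean(axis=1), S.std(axis=1)])

    def fused_features(signal, sr):
        # fuse spectrogram-image features with MFCC spectral features
        mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).mean(axis=1)
        return np.concatenate([image_like_features(signal, sr), mfcc])

    rng = np.random.default_rng(0)
    sr = 16000
    X = np.array([fused_features(rng.standard_normal(sr), sr) for _ in range(20)])
    labels = np.repeat([0, 1], 10)   # toy labels: 0 = non-standard, 1 = standard speech

    clf = SVC(kernel="rbf").fit(X, labels)
    print(clf.predict(X[:2]))

The design point is simply that two heterogeneous feature views are concatenated into one vector before classification; any image-style feature extractor could be dropped into image_like_features in place of the stand-in.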
Speech perception in older adults: the importance of speech-specific cognitive abilities.
Sommers, M S
1997-05-01
To provide a critical evaluation of studies examining the contribution of changes in language-specific cognitive abilities to the speech perception difficulties of older adults. A review of the literature on aging and speech perception. The research considered in the present review suggests that age-related changes in absolute sensitivity are the principal factor affecting older listeners' speech perception in quiet. However, under less favorable listening conditions, changes in a number of speech-specific cognitive abilities can also affect spoken language processing in older people. Clinically, these findings suggest that hearing aids, which have been the traditional treatment for improving speech perception in older adults, are likely to offer considerable benefit in quiet listening situations because the amplification they provide can compensate for age-related hearing losses. However, such devices may be less beneficial in more natural environments (e.g., noisy backgrounds, multiple talkers, reverberant rooms) because they are less effective at improving speech perception difficulties that result from age-related cognitive declines. It is suggested that an integrative approach to designing test batteries that assess both the sensory and cognitive abilities needed for processing spoken language offers the most promising route to developing therapeutic interventions to improve speech perception in older adults.
Unit 802: Language Varies with Approach.
ERIC Educational Resources Information Center
Minnesota Univ., Minneapolis. Center for Curriculum Development in English.
This eighth-grade language unit stresses developing the student's sensitivity to variations in language, primarily the similarities and differences between spoken and written language. Through sample lectures and discussion questions, the students are helped to form generalizations about language: that speech is the primary form of language; that…
Situated sentence processing: the coordinated interplay account and a neurobehavioral model.
Crocker, Matthew W; Knoeferle, Pia; Mayberry, Marshall R
2010-03-01
Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene-sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519-543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA's three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789-795). Finally, we argue that the mechanisms underlying interpretation, visual attention, and scene apprehension are not only in close temporal synchronization, but have co-adapted to optimize real-time visual grounding of situated spoken language, thus facilitating the association of linguistic, visual and motor representations that emerge during the course of our embodied linguistic experience in the world. Copyright 2009 Elsevier Inc. All rights reserved.
Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.
2011-01-01
The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., “Luckily Ben had picked up some salt and pepper/basil”, preceded by a context in which Ben was preparing marinara sauce [congruent] or dealing with an icy walkway [incongruent]). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition. PMID:23002319
Who's on First? Investigating the referential hierarchy in simple native ASL narratives.
Frederiksen, Anne Therese; Mayberry, Rachel I
2016-09-01
Discussions of reference tracking in spoken languages often invoke some version of a referential hierarchy. In this paper, we asked whether this hierarchy applies equally well to reference tracking in a visual language, American Sign Language, or whether modality differences influence its structure. Expanding the results of previous studies, this study looked at ASL referential devices beyond nouns, pronouns, and zero anaphora. We elicited four simple narratives from eight native ASL signers, and examined how the signers tracked reference throughout their stories. We found that ASL signers follow general principles of the referential hierarchy proposed for spoken languages by using nouns for referent introductions, and zero anaphora for referent maintenance. However, we also found significant differences such as the absence of pronouns in the narratives, despite their existence in ASL, and differential use of verbal and constructed action zero anaphora. Moreover, we found that native signers' use of classifiers varied with discourse status in a way that deviated from our expectations derived from the referential hierarchy for spoken languages. On this basis, we propose a tentative hierarchy of referential expressions for ASL that incorporates modality specific referential devices.
Syntax and reading comprehension: a meta-analysis of different spoken-syntax assessments.
Brimo, Danielle; Lund, Emily; Sapp, Alysha
2018-05-01
Syntax is a language skill purported to support children's reading comprehension. However, researchers who have examined whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments report inconsistent results. To determine if differences in how syntax is measured affect whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments. Studies that included a group comparison design, children with average and below-average reading comprehension, and a spoken-syntax assessment were selected for review. Fourteen articles from a total of 1281 reviewed met the inclusionary criteria. The 14 articles were coded for the age of the children, score on the reading comprehension assessment, type of spoken-syntax assessment, type of syntax construct measured and score on the spoken-syntax assessment. A random-effects model was used to analyze the difference between the effect sizes of the types of spoken-syntax assessments and the difference between the effect sizes of the syntax construct measured. There was a significant difference between children with average and below-average reading comprehension on spoken-syntax assessments. Those with average and below-average reading comprehension scored significantly differently on spoken-syntax assessments when norm-referenced and researcher-created assessments were compared. However, when the type of construct was compared, children with average and below-average reading comprehension scored significantly differently on assessments that measured knowledge of spoken syntax, but not on assessments that measured awareness of spoken syntax. The results of this meta-analysis confirmed that the type of spoken-syntax assessment, whether norm-referenced or researcher-created, did not explain why some researchers reported that there were no significant differences between children with average and below-average reading comprehension, but the syntax construct, awareness or knowledge, did. Thus, when selecting how to measure syntax among school-age children, researchers and practitioners should evaluate whether they are measuring children's awareness of spoken syntax or knowledge of spoken syntax. Other differences, such as participant diagnosis and the format of items on the spoken-syntax assessments, also were discussed as possible explanations for why researchers found that children with average and below-average reading comprehension did not score significantly differently on spoken-syntax assessments. © 2017 Royal College of Speech and Language Therapists.
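For readers unfamiliar with the statistical machinery, the following is a minimal sketch of DerSimonian-Laird random-effects pooling of study effect sizes, the general kind of model the meta-analysis reports. The effect sizes and variances below are invented; this is not the authors' analysis:

    import numpy as np

    d = np.array([0.8, 0.5, 1.1, 0.3, 0.9])       # invented standardized mean differences
    v = np.array([0.04, 0.06, 0.05, 0.08, 0.03])  # their sampling variances

    w = 1 / v                                     # fixed-effect weights
    d_fe = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fe) ** 2)               # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(d) - 1)) / c)       # between-study variance estimate

    w_re = 1 / (v + tau2)                         # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    print(f"pooled d = {d_re:.2f} (95% CI {d_re - 1.96 * se:.2f} to {d_re + 1.96 * se:.2f})")

A random-effects model is the natural choice here because the included studies measure syntax with different instruments, so true effects are allowed to vary between studies rather than being assumed identical.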
Real-time lexical comprehension in young children learning American Sign Language.
MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne
2018-04-16
When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality. © 2018 John Wiley & Sons Ltd.
Gesture production and comprehension in children with specific language impairment.
Botting, Nicola; Riches, Nicholas; Gaynor, Marguerite; Morgan, Gary
2010-03-01
Children with specific language impairment (SLI) have difficulties with spoken language. However, some recent research suggests that these impairments reflect underlying cognitive limitations. Studying gesture may inform us clinically and theoretically about the nature of the association between language and cognition. A total of 20 children with SLI and 19 typically developing (TD) peers were assessed on a novel measure of gesture production. Children were also assessed for sentence comprehension errors in a speech-gesture integration task. Children with SLI performed equally to peers on gesture production but performed less well when comprehending integrated speech and gesture. Error patterns revealed a significant group interaction: children with SLI made more gesture-based errors, whilst TD children made semantically based ones. Children with SLI accessed and produced lexically encoded gestures despite having impaired spoken vocabulary and this group also showed stronger associations between gesture and language than TD children. When SLI comprehension breaks down, gesture may be relied on over speech, whilst TD children have a preference for spoken cues. The findings suggest that for children with SLI, gesture scaffolds are still more related to language development than for TD peers who have out-grown earlier reliance on gestures. Future clinical implications may include standardized assessment of symbolic gesture and classroom based gesture support for clinical groups.
Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.
2017-01-01
Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language delays.
Aims: We compared deaf and hearing children's performance on a semantic fluency task. Optimal performance on this task requires a systematic search of the mental lexicon, the retrieval of words within a subcategory and, when that subcategory is exhausted, switching to a new subcategory. We compared retrieval patterns between groups, and also compared the responses of deaf children who used British Sign Language (BSL) with those who used spoken English. We investigated how semantic fluency performance related to children's expressive vocabulary and executive function skills, and also retested semantic fluency in the majority of the children nearly 2 years later, in order to investigate how much progress they had made in that time.
Methods & Procedures: Participants were deaf children aged 6–11 years (N = 106, comprising 69 users of spoken English, 29 users of BSL and eight users of Sign Supported English, SSE) compared with hearing children (N = 120) of the same age who used spoken English. Semantic fluency was tested for the category ‘animals’. We coded for errors, clusters (e.g., ‘pets’, ‘farm animals’) and switches. Participants also completed the Expressive One-Word Picture Vocabulary Test and a battery of six non-verbal executive function tasks. In addition, we collected follow-up semantic fluency data for 70 deaf and 74 hearing children, nearly 2 years after they were first tested.
Outcomes & Results: Deaf children, whether using spoken or signed language, produced fewer items in the semantic fluency task than hearing children, but they showed similar patterns of responses for items most commonly produced, clustering of items into subcategories and switching between subcategories. Both vocabulary and executive function scores predicted the number of correct items produced. Follow-up data from deaf participants showed continuing delays relative to hearing children 2 years later.
Conclusions & Implications: We conclude that semantic fluency can be used experimentally to investigate lexical organization in deaf children, and that it potentially has clinical utility across the heterogeneous deaf population. We present normative data to aid clinicians who wish to use this task with deaf children. PMID:28691260
Le langage des gestes (Body Language).
ERIC Educational Resources Information Center
Brunet, Jean-Paul
1985-01-01
Body language is inseparable from spoken language, and may reflect universal behavior or be culture-specific. Photographs and videotape recordings can help the French instructor illustrate the richness of facial and body mannerisms. (MSE)
Same Talker, Different Language: A Replication.
ERIC Educational Resources Information Center
Stockmal, Verna; Bond, Z. S.
This research investigated judgments of language samples produced by bilingual speakers. In the first study, listeners judged whether two language samples produced by bilingual speakers were spoken in the same language or in two different languages. Four bilingual African talkers recorded short passages in Swahili and in their home language (Akan,…
Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.
Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung
2016-01-01
Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.
Spectrotemporal processing drives fast access to memory traces for spoken words.
Tavano, A; Grimm, S; Costa-Faidella, J; Slabu, L; Schröger, E; Escera, C
2012-05-01
The Mismatch Negativity (MMN) component of the event-related potentials is generated when a detectable spectrotemporal feature of the incoming sound does not match the sensory model set up by preceding repeated stimuli. MMN is enhanced at frontocentral scalp sites for deviant words when compared to acoustically similar deviant pseudowords, suggesting that automatic access to long-term memory traces for spoken words contributes to MMN generation. Does spectrotemporal feature matching also drive automatic lexical access? To test this, we recorded human auditory event-related potentials (ERPs) to disyllabic spoken words and pseudowords within a passive oddball paradigm. We first aimed at replicating the word-related MMN enhancement effect for Spanish, thereby adding to the available cross-linguistic evidence (e.g., Finnish, English). We then probed its resilience to spectrotemporal perturbation by inserting short (20 ms) and long (120 ms) silent gaps between first and second syllables of deviant and standard stimuli. A significantly enhanced, frontocentrally distributed MMN to deviant words was found for stimuli with no gap. The long gap yielded no deviant word MMN, showing that prior expectations of word form limits in a given language influence deviance detection processes. Crucially, the insertion of a short gap suppressed deviant word MMN enhancement at frontocentral sites. We propose that spectrotemporal point-wise matching constitutes a core mechanism for fast serial computations in audition and language, bridging sensory and long-term memory systems. Copyright © 2012 Elsevier Inc. All rights reserved.
Nayak, Satheesha B; Awal, Mahfuzah Binti; Han, Chang Wei; Sivaram, Ganeshram; Vigneswaran, Thimesha; Choon, Tee Lian
2016-01-01
Introduction: The tongue is used mainly for taste, chewing and speech. In the present study, we focused on a secondary function of the tongue: how it is used in phonetic pronunciation and linguistics, and how these factors affect tongue movements.
Objective: To compare all possible tongue movements among Malaysians belonging to three ethnic groups and to find out whether there is any link between the languages spoken and the ability to perform various tongue movements.
Materials and Methods: A total of 450 undergraduate medical students participated in the study. The students were drawn from three groups: Malays, Chinese and Indians (Malaysian Indians). Data were collected from the students through a semi-structured interview, following which each student was asked to demonstrate various tongue movements such as protrusion, retraction, flattening, rolling, twisting, folding or any other special movements. The data obtained were segregated and analysed according to gender, ethnic group and the types and dialects of languages spoken.
Results: Most Malaysians were able to perform basic tongue movements such as protrusion and flattening, and very few were able to perform twisting and folding of the tongue. The ability to perform normal tongue movements and special movements such as folding, twisting and rolling was higher among Indians than among Malays and Chinese.
Conclusion: Languages spoken by Indians involve detailed tongue rolling and folding in the pronunciation of certain words, which may be why Indians are more versatile with tongue movements than the other two groups of Malaysians. It is possible that the languages a person speaks act as a variable that increases the ability to perform special tongue movements, alongside the influence of the person's genetic makeup. PMID:26894051
Li, X; Yang, Y; Ren, G
2009-06-16
Language is often perceived together with visual information. Recent experimental evidence indicates that, during spoken language comprehension, the brain can immediately integrate visual information with semantic or syntactic information from speech. Here we used the mismatch negativity to investigate whether prosodic information from speech can be immediately integrated into a visual scene context, and in particular the time course and automaticity of this integration process. Sixteen Chinese native speakers participated in the study. The materials included Chinese spoken sentences and picture pairs. In the audiovisual situation, relative to the concomitant pictures, the spoken sentence was appropriately accented in the standard stimuli but inappropriately accented in the two kinds of deviant stimuli. In the purely auditory situation, the spoken sentences were presented without pictures. The deviants evoked mismatch responses in both the audiovisual and purely auditory situations; the mismatch negativity in the purely auditory situation peaked at the same time as, but was weaker than, that evoked by the same deviant speech sounds in the audiovisual situation. This pattern of results suggests immediate integration of prosodic information from speech and visual information from pictures in the absence of focused attention.
Wang, Jie; Wong, Andus Wing-Kuen; Chen, Hsuan-Chih
2017-06-05
The time course of phonological encoding in Mandarin monosyllabic word production was investigated by using the picture-word interference paradigm. Participants were asked to name pictures in Mandarin while visual distractor words were presented before, at, or after picture onset (i.e., stimulus-onset asynchrony/SOA = -100, 0, or +100 ms, respectively). Compared with the unrelated control, the distractors sharing atonal syllables with the picture names significantly facilitated the naming responses at -100- and 0-ms SOAs. In addition, the facilitation effect of sharing word-initial segments only appeared at 0-ms SOA, and null effects were found for sharing word-final segments. These results indicate that both syllables and subsyllabic units play important roles in Mandarin spoken word production and more critically that syllabic processing precedes subsyllabic processing. The current results lend strong support to the proximate units principle (O'Seaghdha, Chen, & Chen, 2010), which holds that the phonological structure of spoken word production is language-specific and that atonal syllables are the proximate phonological units in Mandarin Chinese. On the other hand, the significance of word-initial segments over word-final segments suggests that serial processing of segmental information seems to be universal across Germanic languages and Chinese, which remains to be verified in future studies.
Analytic study of the Tadoma method: language abilities of three deaf-blind subjects.
Chomsky, C
1986-09-01
This study reports on the linguistic abilities of 3 adult deaf-blind subjects. The subjects perceive spoken language through touch, placing a hand on the face of the speaker and monitoring the speaker's articulatory motions, a method of speechreading known as Tadoma. Two of the subjects, deaf-blind since infancy, acquired language and learned to speak through this tactile system; the third subject has used Tadoma since becoming deaf-blind at age 7. Linguistic knowledge and productive language are analyzed, using standardized tests and several tests constructed for this study. The subjects' language abilities prove to be extensive, comparing favorably in many areas with hearing individuals. The results illustrate a relatively minor effect of limited language exposure on eventual language achievement. The results also demonstrate the adequacy of the tactile sense, in these highly trained Tadoma users, for transmitting information about spoken language sufficient to support the development of language and learning to produce speech.
Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D
2016-02-01
The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and their second, for a total of 10 months of instruction. The study aimed to characterize modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps right hemispheric recruitment was observed, with increasing left-lateralization, which is similar to other native signers and L2 learners of spoken language; however, specialization for sign language processing with activation in the inferior parietal lobule (i.e., angular gyrus), even for late learners, was observed. As such, the present study is the first to track L2 acquisition of sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Using music to study the evolution of cognitive mechanisms relevant to language.
Patel, Aniruddh D
2017-02-01
This article argues that music can be used in cross-species research to study the evolution of cognitive mechanisms relevant to spoken language. This is because music and language share certain cognitive processing mechanisms and because music offers specific advantages for cross-species research. Music has relatively simple building blocks (tones without semantic properties), yet these building blocks are combined into rich hierarchical structures that engage complex cognitive processing. I illustrate this point with regard to the processing of musical harmonic structure. Because the processing of musical harmonic structure has been shown to interact with linguistic syntactic processing in humans, it is of interest to know if other species can acquire implicit knowledge of harmonic structure through extended exposure to music during development (vs. through explicit training). I suggest that domestic dogs would be a good species to study in addressing this question.
ERIC Educational Resources Information Center
Capek, Cheryl M.; Woll, Bencie; MacSweeney, Mairead; Waters, Dafydd; McGuire, Philip K.; David, Anthony S.; Brammer, Michael J.; Campbell, Ruth
2010-01-01
Studies of spoken and signed language processing reliably show involvement of the posterior superior temporal cortex. This region is also reliably activated by observation of meaningless oral and manual actions. In this study we directly compared the extent to which activation in posterior superior temporal cortex is modulated by linguistic…
ERIC Educational Resources Information Center
Hwang, Hyekyung; Steinhauer, Karsten
2011-01-01
In spoken language comprehension, syntactic parsing decisions interact with prosodic phrasing, which is directly affected by phrase length. Here we used ERPs to examine whether a similar effect holds for the on-line processing of written sentences during silent reading, as suggested by theories of "implicit prosody." Ambiguous Korean sentence…
ERIC Educational Resources Information Center
Ng, Shukhan; Payne, Brennan R.; Stine-Morrow, Elizabeth A. L.; Federmeier, Kara D.
2018-01-01
We investigated how struggling adult readers make use of sentence context to facilitate word processing when comprehending spoken language, conditions under which print decoding is not a barrier to comprehension. Stimuli were strongly and weakly constraining sentences (as measured by cloze probability), which ended with the most expected word…
Preserving Musicality through Pictures: A Linguistic Pathway to Conventional Notation
ERIC Educational Resources Information Center
Nordquist, Alice L.
2016-01-01
The natural musicality so often present in children's singing can begin to fade as the focus of a lesson shifts to the process of reading and writing conventional notation symbols. Approaching the study of music from a linguistic perspective preserves the pace and flow that is inherent in spoken language and song. SongWorks teaching practices…
Hunter Adams, Jo; Penrose, Katherine L.; Cochran, Jennifer; Rybin, Denis; Doros, Gheorghe; Henshaw, Michelle; Paasche-Orlow, Michael
2013-01-01
Background: This study investigated the impact of English health literacy, spoken English proficiency, and acculturation on preventive dental care use among Somali refugees in Massachusetts.
Methods: 439 adult Somalis who had been in the U.S. for 10 years or less were interviewed. English functional health literacy, dental word recognition, and spoken proficiency were measured using the STOFHLA, REALD, and BEST Plus. Logistic regression tested associations of the language measures with preventive dental care use.
Results: Without controlling for acculturation, participants with higher health literacy were 2.0 times more likely to have had preventive care (p=0.02), and subjects with higher word recognition were 1.8 times as likely to have had preventive care (p=0.04). After controlling for acculturation, these associations were no longer significant, and spoken proficiency was not associated with increased preventive care use.
Discussion: English health literacy and spoken proficiency were not independently associated with preventive dental care. Other factors, such as acculturation, were more predictive of care use than language skills. PMID:23748902
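A toy sketch of the confounding pattern the Results describe: a predictor (literacy) that looks predictive on its own can lose its association once a correlated covariate (acculturation) is controlled. All data below are simulated; this is not the study's model or data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 439
    acculturation = rng.normal(0, 1, n)
    literacy = 0.6 * acculturation + rng.normal(0, 1, n)   # correlated predictors
    p = 1 / (1 + np.exp(-0.8 * acculturation))             # care use driven by acculturation
    used_care = rng.binomial(1, p)

    # literacy appears predictive alone, but the association weakens
    # once acculturation enters the model
    alone = LogisticRegression().fit(literacy.reshape(-1, 1), used_care)
    both = LogisticRegression().fit(np.column_stack([literacy, acculturation]), used_care)
    print("literacy odds ratio, unadjusted:", float(np.exp(alone.coef_[0][0])))
    print("literacy odds ratio, adjusted:  ", float(np.exp(both.coef_[0][0])))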
Incremental comprehension of spoken quantifier sentences: Evidence from brain potentials.
Freunberger, Dominik; Nieuwland, Mante S
2016-09-01
Do people incrementally incorporate the meaning of quantifier expressions to understand an unfolding sentence? Most previous studies concluded that quantifiers do not immediately influence how a sentence is understood, based on the observation that online N400-effects differed from offline plausibility judgments. Those studies, however, used serial visual presentation (SVP), which involves unnatural reading. In the current ERP-experiment, we presented spoken positive and negative quantifier sentences ("Practically all/practically no postmen prefer delivering mail, when the weather is good/bad during the day"). Different from results obtained in a previously reported SVP-study (Nieuwland, 2016), sentence truth-value N400 effects occurred in positive and negative quantifier sentences alike, reflecting fully incremental quantifier comprehension. This suggests that the prosodic information available during spoken language comprehension supports the generation of online predictions for upcoming words and that, at least for quantifier sentences, comprehension of spoken language may proceed more incrementally than comprehension during SVP reading. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
When semantics aids phonology: A processing advantage for iconic word forms in aphasia.
Meteyard, Lotte; Stoppard, Emily; Snudden, Dee; Cappa, Stefano F; Vigliocco, Gabriella
2015-09-01
Iconicity is the non-arbitrary relation between properties of a phonological form and semantic content (e.g. "moo", "splash"). It is a common feature of both spoken and signed languages, and recent evidence shows that iconic forms confer an advantage during word learning. We explored whether iconic forms conferred a processing advantage for 13 individuals with aphasia following left-hemisphere stroke. Iconic and control words were compared in four different tasks: repetition, reading aloud, auditory lexical decision and visual lexical decision. An advantage for iconic words was seen for some individuals in all tasks, with consistent group effects emerging in reading aloud and auditory lexical decision. Both these tasks rely on mapping between semantics and phonology. We conclude that iconicity aids spoken word processing for individuals with aphasia. This advantage is due to a stronger connection between semantic information and phonological forms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Frisch, Stefan A.; Pisoni, David B.
2012-01-01
Objective: Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These models also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception.
Design: A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies.
Results: Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access.
Conclusions: Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition. Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with normal hearing. PMID:11132784
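The second simulation, in which phoneme decisions are deferred until lexical access, can be illustrated with a toy sketch: each lexical candidate is scored by the product of per-position phoneme identification probabilities, and the best-scoring word is selected. The lexicon and identification probabilities below are invented, not the study's materials:

    import numpy as np

    lexicon = ["bat", "pat", "bad", "mat"]   # hypothetical closed set

    # p_ident[i][ph]: probability the i-th input segment is identified as phoneme ph
    p_ident = [
        {"b": 0.6, "p": 0.3, "m": 0.1},
        {"a": 1.0},
        {"t": 0.7, "d": 0.3},
    ]

    def lexical_score(word):
        # delayed-decision model: evidence is accumulated across the whole word
        return float(np.prod([p_ident[i].get(ph, 0.0) for i, ph in enumerate(word)]))

    scores = {w: lexical_score(w) for w in lexicon}
    print(scores, "->", max(scores, key=scores.get))   # "bat": 0.6 * 1.0 * 0.7 = 0.42

An early-decision variant would instead commit to the single most probable phoneme at each position before consulting the lexicon, discarding the graded evidence that makes the delayed model more accurate.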
Large-scale Cortical Network Properties Predict Future Sound-to-Word Learning Success
Sheppard, John Patrick; Wang, Ji-Ping; Wong, Patrick C. M.
2013-01-01
The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants’ future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults. PMID:22360625
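The two network measures at the heart of the study, global and local efficiency, can be computed with standard graph tooling. A minimal sketch using networkx on a random surrogate graph (a real analysis would first threshold an fMRI correlation matrix into a graph):

    import networkx as nx

    # random surrogate for a 90-region cortical functional network
    G = nx.erdos_renyi_graph(n=90, p=0.1, seed=1)
    print("global efficiency:", round(nx.global_efficiency(G), 3))
    print("local efficiency: ", round(nx.local_efficiency(G), 3))

Global efficiency averages inverse shortest-path lengths over all node pairs (long-range integration), while local efficiency averages the same quantity within each node's neighborhood (clustered, fault-tolerant processing); the study's successful learners showed higher values of the former and lower values of the latter.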
Saving a Language with Computers, Tape Recorders, and Radio.
ERIC Educational Resources Information Center
Bennett, Ruth
This paper discusses the use of technology in instruction. It begins by examining research on technology and indigenous languages, focusing on the use of technology to get community attention for an indigenous language, improve the quantity of quality language, document spoken language, create sociocultural learning contexts, improve study skills,…
Signs of Change: Contemporary Attitudes to Australian Sign Language
ERIC Educational Resources Information Center
Slegers, Claudia
2010-01-01
This study explores contemporary attitudes to Australian Sign Language (Auslan). Since at least the 1960s, sign languages have been accepted by linguists as natural languages with all of the key ingredients common to spoken languages. However, these visual-spatial languages have historically been subject to ignorance and myth in Australia and…
Micro Language Planning and Cultural Renaissance in Botswana
ERIC Educational Resources Information Center
Alimi, Modupe M.
2016-01-01
Many African countries exhibit complex patterns of language use because of linguistic pluralism. The situation is often compounded by the presence of at least one foreign language that is either the official or second language. The language situation in Botswana depicts this complex pattern. Out of the 26 languages spoken in the country, including…
Corina, David P; Knapp, Heather Patterson
2008-12-01
In the quest to further understand the neural underpinning of human communication, researchers have turned to studies of naturally occurring signed languages used in Deaf communities. The comparison of the commonalities and differences between spoken and signed languages provides an opportunity to determine core neural systems responsible for linguistic communication independent of the modality in which a language is expressed. The present article examines such studies, and in addition asks what we can learn about human languages by contrasting formal visual-gestural linguistic systems (signed languages) with more general human action perception. To understand visual language perception, it is important to distinguish the demands of general human motion processing from the highly task-dependent demands associated with extracting linguistic meaning from arbitrary, conventionalized gestures. This endeavor is particularly important because theorists have suggested close homologies between perception and production of actions and functions of human language and social communication. We review recent behavioral, functional imaging, and neuropsychological studies that explore dissociations between the processing of human actions and signed languages. These data suggest incomplete overlap between the mirror-neuron systems proposed to mediate human action and language.
Crume, Peter K
2013-10-01
The National Reading Panel emphasizes that spoken language phonological awareness (PA) developed at home and school can lead to improvements in reading performance in young children. However, research indicates that many deaf children are good readers even though they have limited spoken language PA. Is it possible that some deaf students benefit from teachers who promote sign language PA instead? The purpose of this qualitative study is to examine teachers' beliefs and instructional practices related to sign language PA. A thematic analysis is conducted on 10 participant interviews at an ASL/English bilingual school for the deaf to understand their views and instructional practices. The findings reveal that the participants had strong beliefs in developing students' structural knowledge of signs and used a variety of instructional strategies to build students' knowledge of sign structures in order to promote their language and literacy skills.
Semiotic diversity in utterance production and the concept of ‘language’
Kendon, Adam
2014-01-01
Sign language descriptions that use an analytic model borrowed from spoken language structural linguistics have proved to be not fully appropriate. Pictorial and action-like modes of expression are integral to how signed utterances are constructed and to how they work. However, observation shows that speakers likewise use kinesic and vocal expressions that are not accommodated by spoken language structural linguistic models, including pictorial and action-like modes of expression. These, also, are integral to how speaker utterances in face-to-face interaction are constructed and to how they work. Accordingly, the object of linguistic inquiry should be revised, so that it comprises not only an account of the formal abstract systems that utterances make use of, but also an account of how the semiotically diverse resources that all languaging individuals use are organized in relation to one another. Both language as an abstract system and languaging should be the concern of linguistics. PMID:25092661
Phonological awareness: explicit instruction for young deaf and hard-of-hearing children.
Miller, Elizabeth M; Lederberg, Amy R; Easterbrooks, Susan R
2013-04-01
The goal of this study was to explore the development of spoken phonological awareness for deaf and hard-of-hearing children (DHH) with functional hearing (i.e., the ability to access spoken language through hearing). Teachers explicitly taught five preschoolers the phonological awareness skills of syllable segmentation, initial phoneme isolation, and rhyme discrimination in the context of a multifaceted emergent literacy intervention. Instruction occurred in settings where teachers used simultaneous communication or spoken language only. A multiple-baseline across skills design documented a functional relation between instruction and skill acquisition for those children who did not have the skills at baseline with one exception; one child did not meet criteria for syllable segmentation. These results were confirmed by changes on phonological awareness tests that were administered at the beginning and end of the school year. We found that DHH children who varied in primary communication mode, chronological age, and language ability all benefited from explicit instruction in phonological awareness.
Corina, David P.; Lawyer, Laurel A.; Cates, Deborah
2013-01-01
Studies of deaf individuals who are users of signed languages have provided profound insight into the neural representation of human language. Case studies of deaf signers who have incurred left- and right-hemisphere damage have shown that left-hemisphere resources are a necessary component of sign language processing. These data suggest that, despite frank differences in the input and output modality of language, core left perisylvian regions universally serve linguistic function. Neuroimaging studies of deaf signers have generally provided support for this claim. However, more fine-tuned studies of linguistic processing in deaf signers are beginning to show evidence of important differences in the representation of signed and spoken languages. In this paper, we provide a critical review of this literature and present compelling evidence for language-specific cortical representations in deaf signers. These data lend support to the claim that the neural representation of language may show substantive cross-linguistic differences. We discuss the theoretical implications of these findings with respect to an emerging understanding of the neurobiology of language. PMID:23293624
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying the natural language present in a given recording or dataset. Typically, the data must be processed to extract useful features on which LID can be performed. Feature extraction for LID is, according to the literature, a mature process: standard features have already been developed, ranging from Mel-Frequency Cepstral Coefficients (MFCC) and Shifted Delta Cepstral (SDC) coefficients through the Gaussian Mixture Model (GMM) to the i-vector based framework. However, the process of learning from the extracted features can still be improved (i.e. optimised) to capture all of the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective model for classification and regression analysis and is particularly useful for training a single-hidden-layer neural network. Nevertheless, its learning process is not entirely effective (i.e. optimised) because the weights between the input and hidden layer are selected at random. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One optimisation approach for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is taken as the benchmark and improved by altering the selection phase of the optimisation process, incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results were generated on LID datasets created from eight different languages and show the clear superiority of ESA-ELM LID over SA-ELM LID, with accuracies of 96.25% and 95.00%, respectively.
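The base learner is simple enough to sketch. The following is a minimal illustration of a plain ELM over pooled MFCC features, assuming librosa for feature extraction; it is not the authors' ESA-ELM (which additionally optimises the randomly drawn hidden-layer weights), and the function names and 16 kHz sample rate are illustrative assumptions.

```python
# Minimal sketch: a plain Extreme Learning Machine for language ID over
# time-averaged MFCC features. Illustrative only; not the ESA-ELM.
import numpy as np
import librosa

def mfcc_features(path, n_mfcc=13):
    """Load an audio file and return a time-averaged MFCC vector."""
    y, sr = librosa.load(path, sr=16000)            # 16 kHz is an assumption
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

class ELM:
    """Single-hidden-layer ELM: random, fixed input weights; output
    weights solved analytically by least squares (the ELM's key trick)."""
    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # X: (n_samples, n_features) array; y: integer language labels
        T = np.eye(int(y.max()) + 1)[y]             # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)            # random hidden layer
        self.beta = np.linalg.pinv(H) @ T           # least-squares solution
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)
```

Because only `beta` is learned, training reduces to one pseudoinverse; the random choice of `W` and `b` is exactly the step the SA-ELM family tries to optimise.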
Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G
2016-12-01
Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in macro-structure and in micro-structural devices. Most previous research used narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single-word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that the deaf and hearing children had equivalent macro-structure skills, but the deaf group performed more poorly on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For the deaf children, micro-level devices correlated most strongly with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty with grammatical devices that depend on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Cejas, Ivette; Mitchell, Christine M; Hoffman, Michael; Quittner, Alexandra L
2018-04-05
The aims were to make longitudinal comparisons of intelligence quotient (IQ) in children with cochlear implants (CIs) and typically hearing peers from early in development to the school-age period, to evaluate children with CIs and additional comorbidities, and to estimate the impact of socioeconomic status and oral language on school-age cognitive performance. This longitudinal study evaluated nonverbal IQ in a multicenter, national sample of 147 children with CIs and 75 typically hearing peers. IQ was evaluated at baseline, prior to cochlear implantation, using the Bayley Scales of Infant and Toddler Development and the Leiter International Performance Scale. School-age IQ was assessed using the Wechsler Intelligence Scales for Children; for the current study, only the Perceptual Reasoning and Processing Speed indices were administered. Oral language was evaluated using the Comprehensive Assessment of Spoken Language. Children in the CI group scored within the normal range of intelligence at both time points. However, children with additional comorbidities scored significantly worse on the Processing Speed index, but not the Perceptual Reasoning index. Maternal education and language were significantly related to school-age IQ in both groups. Importantly, language was the strongest predictor of intellectual functioning both in children with CIs and in those with normal hearing. These results suggest that children using cochlear implants perform similarly to hearing peers on measures of intelligence, but that those with severe comorbidities are at risk for cognitive deficits. Despite the strong link between socioeconomic status and intelligence, this association was no longer significant once spoken language performance was accounted for. These results underscore the important contribution that early intervention programs emphasizing language and parent training make to cognitive functioning in school-age children with CIs. For families from economically disadvantaged backgrounds, who are at risk for suboptimal outcomes, such early intervention programs are critical to improving overall functioning.
Enduring Advantages of Early Cochlear Implantation for Spoken Language Development
Geers, Ann E.; Nicholas, Johanna G.
2013-01-01
Purpose: To determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12–38 months of age. Relative advantages of receiving a bilateral CI after age 4.5, better pre-CI aided hearing, and longer CI experience were also examined. Method: Sixty children participated in a prospective longitudinal study of outcomes at 4.5 and 10.5 years of age. Twenty-nine children received a sequential second CI. Test scores were compared to normative samples of hearing age-mates, and predictors of outcomes were identified. Results: Standard scores on language tests at 10.5 years of age remained significantly correlated with age of first cochlear implantation. Scores were not associated with receipt of a second, sequentially-acquired CI. Significantly higher scores were achieved for vocabulary as compared with overall language, a finding not evident when the children were tested at younger ages. Conclusion: Age-appropriate spoken language skills continued to be more likely with younger AOI, even after an average of 8.6 years of additional CI use. Receipt of a second implant between ages 4–10 years and longer duration of device use did not provide significant added benefit. PMID:23275406
Liebenthal, Einat; Silbersweig, David A.; Stern, Emily
2016-01-01
Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala, a subcortical center for emotion perception, are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states. PMID:27877106
Hope, Thomas M H; Leff, Alex P; Prejawa, Susan; Bruce, Rachel; Haigh, Zula; Lim, Louise; Ramsden, Sue; Oberhuber, Marion; Ludersdorfer, Philipp; Crinion, Jenny; Seghier, Mohamed L; Price, Cathy J
2017-06-01
Stroke survivors with acquired language deficits are commonly thought to reach a 'plateau' within a year of stroke onset, after which their residual language skills will remain stable. Nevertheless, there have been reports of patients who appear to recover over years. Here, we analysed longitudinal change in 28 left-hemisphere stroke patients, each more than a year post-stroke when first assessed, testing each patient's spoken object naming skills and acquiring structural brain scans twice. Some of the patients appeared to improve over time while others declined; both directions of change were associated with, and predictable given, structural adaptation in the intact right hemisphere of the brain. Contrary to the prevailing view that these patients' language skills are stable, these results imply that real change continues over years. The strongest brain-behaviour associations (the 'peak clusters') were in the anterior temporal lobe and the precentral gyrus. Using functional magnetic resonance imaging, we confirmed that both regions are actively involved when neurologically normal control subjects name visually presented objects, but neither appeared to be involved when the same participants used a finger press to make semantic association decisions on the same stimuli. This suggests that these regions serve word-retrieval or articulatory functions in the undamaged brain. We teased these interpretations apart by reference to change in other tasks. Consistent with the claim that the real change is occurring here, change in spoken object naming was correlated with change in two other similar tasks, spoken action naming and written object naming, each of which was independently associated with structural adaptation in similar (overlapping) right hemisphere regions. Change in written object naming, which requires word-retrieval but not articulation, was also significantly more correlated with both (i) change in spoken object naming; and (ii) structural adaptation in the two peak clusters, than was change in another task, auditory word repetition, which requires articulation but not word retrieval. This suggests that the changes in spoken object naming reflected variation at the level of word-retrieval processes. Surprisingly, given their qualitatively similar activation profiles, hypertrophy in the anterior temporal region was associated with improving behaviour, while hypertrophy in the precentral gyrus was associated with declining behaviour. We predict that either or both of these regions might be fruitful targets for neural stimulation studies (suppressing the precentral region and/or enhancing the anterior temporal region), aiming to encourage recovery or arrest decline even years after stroke occurs. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain.
Using spoken words to guide open-ended category formation.
Chauhan, Aneesh; Seabra Lopes, Luís
2011-11-01
Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.
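The paper describes its architecture only at this level of detail; as a toy sketch of the underlying word-grounding idea (not the authors' system), one can keep an incrementally updated perceptual prototype per taught word and classify new percepts by nearest prototype. All names below are hypothetical.

```python
# Toy sketch of open-ended word grounding: each taught spoken label gets
# an incrementally updated perceptual prototype. Not the paper's system.
import numpy as np

class OpenEndedLearner:
    def __init__(self):
        self.prototypes = {}  # word -> (running mean feature vector, count)

    def teach(self, word, features):
        """Instructor names the attended object; update that word's prototype."""
        mean, n = self.prototypes.get(word, (np.zeros_like(features), 0))
        self.prototypes[word] = ((mean * n + features) / (n + 1), n + 1)

    def classify(self, features):
        """Return the known word whose prototype is nearest (None if untaught)."""
        if not self.prototypes:
            return None
        return min(self.prototypes,
                   key=lambda w: np.linalg.norm(self.prototypes[w][0] - features))
```

Corrective feedback in the naming game then maps naturally onto further `teach` calls with the correct label, which is what makes the category set open-ended.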
Why Dose Frequency Affects Spoken Vocabulary in Preschoolers with Down Syndrome
ERIC Educational Resources Information Center
Yoder, Paul J.; Woynaroski, Tiffany; Fey, Marc E.; Warren, Steven F.; Gardner, Elizabeth
2015-01-01
In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only…
"Jaja" in Spoken German: Managing Knowledge Expectations
ERIC Educational Resources Information Center
Taleghani-Nikazm, Carmen; Golato, Andrea
2016-01-01
In line with the other contributions to this issue on teaching pragmatics, this paper provides teachers of German with a two-day lesson plan for integrating authentic spoken language and its associated cultural background into their teaching. Specifically, the paper discusses how "jaja" and its phonetic variants are systematically used…
AI-Based Chatterbots and Spoken English Teaching: A Critical Analysis
ERIC Educational Resources Information Center
Sha, Guoquan
2009-01-01
The aim of various approaches implemented, whether the classical "three Ps" (presentation, practice, and production) or communicative language teaching (CLT), is to achieve communicative competence. Although a lot of software developed for teaching spoken English is dressed up to raise interaction, its methodology is largely rooted in tradition.…
ERIC Educational Resources Information Center
Verhoeven, Ludo; Steenge, Judit; van Leeuwe, Jan; van Balkom, Hans
2017-01-01
In this study, we investigated which componential skills can be distinguished in the second language (L2) development of 140 bilingual children with specific language impairment in the Netherlands, aged 6-11 years, divided into 3 age groups. L2 development was assessed by means of spoken language tasks representing different language skills…
ERIC Educational Resources Information Center
Hansen, Lynne
2011-01-01
Recent years have brought increasing attention to studies of language acquisition in a country where the language is spoken, as opposed to formal language study in classrooms. Research on language learners in immersion contexts is important, as the question of whether study abroad is valuable is still somewhat controversial among researchers…
ERIC Educational Resources Information Center
Phillippe, Denise E.
2012-01-01
At Concordia Language Villages, language and culture are inextricably intertwined, as they are in life. Participants "live" and "do" language and culture 16 hours per day. The experiential, residential setting immerses the participants in the culture of the country or countries where the target language is spoken through food,…
Sentence Repetition in Deaf Children with Specific Language Impairment in British Sign Language
ERIC Educational Resources Information Center
Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary
2015-01-01
Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…
Alsatian versus Standard German: Regional Language Bilingual Primary Education in Alsace
ERIC Educational Resources Information Center
Harrison, Michelle Anne
2016-01-01
This article examines the current situation of regional language bilingual primary education in Alsace and contends that the regional language presents a special case in the context of France. The language comprises two varieties: Alsatian, which traditionally has been widely spoken, and Standard German, used as the language of reference and…
Language and Literacy: The Case of India.
ERIC Educational Resources Information Center
Sridhar, Kamal K.
Language and literacy issues in India are reviewed in terms of background, steps taken to combat illiteracy, and some problems associated with literacy. The following facts are noted: India has 106 languages spoken by more than 685 million people, there are several minor script systems, a major language has different dialects, a language may use…
English Language Learners. What Works Clearinghouse Topic Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2007
2007-01-01
English language learners are students with a primary language other than English who have a limited range of speaking, reading, writing, and listening skills in English. English language learners also include students identified and determined by their school as having limited English proficiency and a language other than English spoken in the…
Beyond Languages, beyond Modalities: Transforming the Study of Semiotic Repertoires
ERIC Educational Resources Information Center
Kusters, Annelies; Spotti, Massimiliano; Swanwick, Ruth; Tapio, Elina
2017-01-01
This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the concept of language repertoires. We discuss key…
Spanish as a Second Language when L1 Is Quechua: Endangered Languages and the SLA Researcher
ERIC Educational Resources Information Center
Kalt, Susan E.
2012-01-01
Spanish is one of the most widely spoken languages in the world. Quechua is the largest indigenous language family to constitute the first language (L1) of second language (L2) Spanish speakers. Despite sheer number of speakers and typologically interesting contrasts, Quechua-Spanish second language acquisition is a nearly untapped research area,…
Vanhoucke, Elodie; Cousin, Emilie; Baciu, Monica
2013-03-01
Growing evidence suggests that age has an impact on the interhemispheric representation of language. The dichotic listening test allows assessment of lateralization for spoken language, and it generally reveals a right-ear/left-hemisphere (LH) advantage in young adults. According to previously reported results, older adults display increasing LH predominance for language in some studies and stable LH lateralization in others. The aim of this study was to characterize the effect of normal aging on hemispheric specialization for language using the dichotic listening test. A meta-analysis based on 11 studies was performed. The interhemispheric asymmetry does not appear to increase with age. A supplementary qualitative analysis suggests that the right-ear advantage increases between 40 and 49 years of age and becomes stable or decreases after 55 years of age, suggesting a right-ear/LH decline.
Cross-Language Distributions of High Frequency and Phonetically Similar Cognates
Schepens, Job; Dijkstra, Ton; Grootjen, Franc; van Heuven, Walter J. B.
2013-01-01
The coinciding form and meaning similarity of cognates, e.g. ‘flamme’ (French), ‘Flamme’ (German), ‘vlam’ (Dutch), meaning ‘flame’ in English, facilitates learning of additional languages. The cross-language frequency and similarity distributions of cognates vary according to evolutionary change and language contact. We compare frequency and orthographic (O), phonetic (P), and semantic similarity of cognates, automatically identified in semi-complete lexicons of six widely spoken languages. Comparisons of P and O similarity reveal inconsistent mappings in language pairs with deep orthographies. The frequency distributions show that cognate frequency is reduced in less closely related language pairs as compared to more closely related languages (e.g., French-English vs. German-English). These frequency and similarity patterns may support a better understanding of cognate processing in natural and experimental settings. The automatically identified cognates are available in the supplementary materials, including the frequency and similarity measurements. PMID:23675449
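The paper's orthographic (O) similarity measure is not reproduced in this abstract; a common choice for this kind of cognate work, shown below purely as an assumption, is length-normalised Levenshtein similarity between word forms.

```python
# Length-normalised Levenshtein similarity between two orthographic forms,
# a standard proxy for cognate form overlap (an illustrative assumption).
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """1.0 for identical forms, approaching 0.0 for unrelated ones."""
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

print(similarity("flamme", "vlam"))  # French vs. Dutch 'flame' -> 0.5
```

The same normalisation applies to phonetic (P) similarity when the forms are phonetic transcriptions rather than spellings, which is one way the O/P mapping inconsistencies in deep orthographies become measurable.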
The Potential of Elicited Imitation for Oral Output Practice in German L2
ERIC Educational Resources Information Center
Cornillie, Frederik; Baten, Kristof; De Hertog, Dirk
2017-01-01
This paper reports on the potential of Oral Elicited Imitation (OEI) as a format for output practice, building on an analysis of picture-matching and spoken data collected from 36 university-level learners of German as a second language (L2) in a web-based assessment task inspired by Input Processing (VanPatten, 2004). The design and development…
Phoneme Restoration Methods Reveal Prosodic Influences on Syntactic Parsing: Data from Bulgarian
ERIC Educational Resources Information Center
Stoyneshka-Raleva, Iglika
2013-01-01
This dissertation introduces and evaluates a new methodology for studying aspects of human language processing and the factors to which it is sensitive. It makes use of the phoneme restoration illusion (Warren, 1970). A small portion of a spoken sentence is replaced by a burst of noise. Listeners typically mentally restore the missing phoneme(s),…
Is there an effect of dysphonic teachers' voices on children's processing of spoken language?
Rogerson, Jemma; Dodd, Barbara
2005-03-01
There is a vast body of literature on the causes, prevalence, implications, and issues of vocal dysfunction in teachers. However, the educational effect of teacher vocal impairment is largely unknown. The purpose of this study was to investigate the effect of impaired voice quality on children's processing of spoken language. One hundred and seven children (age range, 9.2 to 10.6, mean 9.8, SD 3.76 months) listened to three video passages, one read in a control voice, one in a mild dysphonic voice, and one in a severe dysphonic voice. After each video passage, children were asked to answer six questions, with multiple-choice answers. The results indicated that children's perceptions of speech across the three voice qualities differed, regardless of gender, IQ, and school attended. Performance in the control voice passages was better than performance in the mild and severe dysphonic voice passages. No difference was found between performance in the mild and severe dysphonic voice passages, highlighting that any form of vocal impairment is detrimental to children's speech processing and is therefore likely to have a negative educational effect. These findings, in light of the high rate of vocal dysfunction in teachers, further support the implementation of specific voice care education for those in the teaching profession.
Australian Aboriginal Deaf People and Aboriginal Sign Language
ERIC Educational Resources Information Center
Power, Des
2013-01-01
Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…
Krio Language Manual. Revised Edition.
ERIC Educational Resources Information Center
Peace Corps, Freetown (Sierra Leone).
Instructional materials for Krio, the creole spoken in Sierra Leone, are designed for Peace Corps volunteer language instruction and intended for the use of both students and instructors. Fifty-six units provide practice in language skills, particularly oral, geared to the daily language needs of volunteers. Lessons are designed for audio-lingual…
Auditory Technology and Its Impact on Bilingual Deaf Education
ERIC Educational Resources Information Center
Mertes, Jennifer
2015-01-01
Brain imaging studies suggest that children can simultaneously develop, learn, and use two languages. A visual language, such as American Sign Language (ASL), facilitates development at the earliest possible moments in a child's life. Spoken language development can be delayed due to diagnostic evaluations, device fittings, and auditory skill…
ERIC Educational Resources Information Center
McFerren, Margaret
A survey of the status of language usage in Iran begins with an overview of the usage pattern of Persian, the official language spoken by just over half the population, and the competing languages of six ethnic and linguistic minorities: Azerbaijani, Kurdish, Arabic, Gilaki, Luri-Bakhtiari, and Mazandarani. The development of language policy…
El Espanol como Idioma Universal (Spanish as a Universal Language)
ERIC Educational Resources Information Center
Mijares, Jose
1977-01-01
A proposal to transform Spanish into a universal language because it possesses the prerequisites: it is a living language, spoken in several countries; it is a natural language; and it uses the ordinary alphabet. Details on simplification and standardization are given. (Text is in Spanish.) (AMH)
The Role of Pronunciation in SENCOTEN Language Revitalization
ERIC Educational Resources Information Center
Bird, Sonya; Kell, Sarah
2017-01-01
Most Indigenous language revitalization programs in Canada currently emphasize spoken language. However, virtually no research has been done on the role of pronunciation in the context of language revitalization. This study set out to gain an understanding of attitudes around pronunciation in the SENCOTEN-speaking community, in order to determine…
Cultural Pluralism in Japan: A Sociolinguistic Outline.
ERIC Educational Resources Information Center
Honna, Nobuyuki
1980-01-01
Addressing the common misconception that Japan is a mono-ethnic, mono-cultural, and monolingual society, this article focuses on several areas of sociolinguistic concern. It discusses: (1) the bimodalism of the Japanese deaf population between Japanese Sign Language as native language and Japanese Spoken Language as acquired second language; (2)…
Audience Effects in American Sign Language Interpretation
ERIC Educational Resources Information Center
Weisenberg, Julia
2009-01-01
There is a system of English mouthing during interpretation that appears to be the result of language contact between spoken language and signed language. English mouthing is a voiceless visual representation of words on a signer's lips produced concurrently with manual signs. It is a type of borrowing prevalent among English-dominant…
Language-in-Education Policies in the Catalan Language Area
ERIC Educational Resources Information Center
Vila i Moreno, F. Xavier
2008-01-01
The territories where Catalan is traditionally spoken as a native language constitute an attractive sociolinguistic laboratory which appears especially interesting from the point of view of language-in-education policies. The educational system has spearheaded the recovery of Catalan during the last 20 years. Schools are being attributed most of…
The Abakua Secret Society in Cuba: Language and Culture.
ERIC Educational Resources Information Center
Cedeno, Rafael A. Nunez
1988-01-01
Reports on attempts to determine whether Cuban Abakua is a pidginized Afro-Spanish, creole, or dead language and concludes that some of this language, spoken by a secret society, has its roots in Efik, a language of the Benue-Congo, and seems to be a simple, ritualistic, structureless argot. (CB)
Language Planning for Venezuela: The Role of English.
ERIC Educational Resources Information Center
Kelsey, Irving; Serrano, Jose
A rationale for teaching foreign languages in Venezuelan schools is discussed. An included sociolinguistic profile of Venezuela indicates that Spanish is the sole language of internal communication needs. Other languages spoken in Venezuela serve primarily a group function among the immigrant and indigenous communities. However, the teaching of…
Spelling Well Despite Developmental Language Disorder: What Makes it Possible?
Rakhlin, Natalia; Cardoso-Martins, Cláudia; Kornilov, Sergey A.; Grigorenko, Elena L.
2013-01-01
The goal of the study was to investigate the overlap between Developmental Language Disorder (DLD) and Developmental Dyslexia, identified through spelling difficulties (SD), in Russian-speaking children. In particular, we studied the role of phoneme awareness (PA), rapid automatized naming (RAN), pseudoword repetition (PWR), morphological awareness (MA), and orthographic awareness (OA) in differentiating between children with DLD who have SD and children with DLD who are average spellers, by comparing the two groups to each other, to typically developing children, and to children with SD but without spoken language deficits. One hundred forty-nine children, aged 10.40 to 14.00, participated in the study. The results indicated that the SD, DLD, and DLD/SD groups did not differ from each other on PA and RAN Letters and underperformed in comparison to the control groups. However, whereas the children with written language deficits (SD and DLD/SD groups) underperformed on RAN Objects and Digits, PWR, OA, and MA, the children with DLD and no SD performed similarly to the children from the control groups on these measures. In contrast, the two groups with spoken language deficits (DLD and DLD/SD) underperformed on RAN Colors in comparison to the control groups and the group of children with SD only. The results support the notion that those children with DLD who have unimpaired PWR and RAN skills are able to overcome their weaknesses in spoken language and PA and acquire basic literacy on a par with their age peers with typical language. We also argue that our findings support a multifactorial model of developmental language disorders. PMID:23860907
A task-dependent causal role for low-level visual processes in spoken word comprehension.
Ostarek, Markus; Huettig, Falk
2017-08-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
The link between form and meaning in American Sign Language: lexical processing effects.
Thompson, Robin L; Vinson, David P; Vigliocco, Gabriella
2009-03-01
Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture-sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of American Sign Language (ASL). The results show that native ASL signers are faster to respond when a specific property iconically represented in a sign is made salient in the corresponding picture, thus providing evidence that a closer mapping between meaning and form can aid in lexical retrieval. While late 2nd-language learners appear to use iconicity as an aid to learning sign (R. Campbell, P. Martin, & T. White, 1992), they did not show the same facilitation effect as native ASL signers, suggesting that the task tapped into more automatic language processes. Overall, the findings suggest that completely arbitrary mappings between meaning and form may not be more advantageous in language and that, rather, arbitrariness may simply be an accident of modality. (c) 2009 APA, all rights reserved
The Hebrew CHILDES corpus: transcription and morphological analysis
Albert, Aviad; MacWhinney, Brian; Nir, Bracha
2014-01-01
We present a corpus of transcribed spoken Hebrew that reflects spoken interactions between children and adults. The corpus is an integral part of the CHILDES database, which distributes similar corpora for over 25 languages. We introduce a dedicated transcription scheme for the spoken Hebrew data that is sensitive to both the phonology and the standard orthography of the language. We also introduce a morphological analyzer that was specifically developed for this corpus. The analyzer adequately covers the entire corpus, producing detailed correct analyses for all tokens. Evaluation on a new corpus reveals high coverage as well. Finally, we describe a morphological disambiguation module that selects the correct analysis of each token in context. The result is a high-quality morphologically-annotated CHILDES corpus of Hebrew, along with a set of tools that can be applied to new corpora. PMID:25419199
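The abstract describes the disambiguation module only functionally; as a deliberately simplified illustration of selecting "the correct analysis of each token in context", the sketch below greedily picks, for each token, the candidate tag that best follows the previous one under bigram counts. The authors' actual module is not claimed to work this way, and all names are hypothetical.

```python
# Simplified contextual disambiguation: greedy bigram tag selection.
# Illustrative only; not the corpus's actual disambiguation module.
from collections import defaultdict

def train_bigrams(tagged_corpus):
    """tagged_corpus: list of sentences, each a list of morphological tags."""
    counts = defaultdict(lambda: defaultdict(int))
    for tags in tagged_corpus:
        for prev, cur in zip(["<s>"] + tags, tags):
            counts[prev][cur] += 1
    return counts

def disambiguate(candidates, counts):
    """candidates: per-token lists of possible tags from the analyzer;
    returns one tag per token, chosen greedily left to right."""
    chosen, prev = [], "<s>"
    for options in candidates:
        best = max(options, key=lambda t: counts[prev][t])
        chosen.append(best)
        prev = best
    return chosen
```

A production module would more plausibly use Viterbi decoding over the whole sentence rather than this greedy left-to-right pass, but the in-context selection problem is the same.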
Understanding environmental sounds in sentence context.
Uddin, Sophia; Heald, Shannon L M; Van Hedger, Stephen C; Klos, Serena; Nusbaum, Howard C
2018-03-01
There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions. Copyright © 2017 Elsevier B.V. All rights reserved.
Individual Differences in the Real-Time Comprehension of Children with ASD
Venker, Courtney E.; Eernisse, Elizabeth R.; Saffran, Jenny R.; Weismer, Susan Ellis
2013-01-01
Lay Abstract: Spoken language processing is related to language and cognitive skills in typically developing children, but very little is known about how children with autism spectrum disorders (ASD) comprehend words in real time. Studying this area is important because it may help us understand why many children with autism have delayed language comprehension. Thirty-four children with ASD (3–6 years old) participated in this study. They took part in a language comprehension task that involved looking at pictures on a screen and listening to questions about familiar nouns (e.g., Where’s the shoe?). Children as a group understood the familiar words, but accuracy and processing speed varied considerably across children. The children who were more accurate were also faster to process the familiar words. Children’s language processing accuracy was related to processing speed and language comprehension on a standardized test; nonverbal cognition did not explain additional information after accounting for these factors. Additionally, lexical processing accuracy at age 5½ was related to children’s vocabulary comprehension three years earlier, at age 2½. Autism severity and years of maternal education were unrelated to language processing. Words typically acquired earlier in life were processed more quickly than words acquired later. These findings point to similarities in patterns of language development in typically developing children and children with ASD. Studying real-time comprehension in children with ASD may help us better understand mechanisms of language comprehension in this population. Future work may help explain why some children with ASD develop age-appropriate language skills, whereas others experience lasting deficits.
Scientific Abstract: Many children with autism spectrum disorders (ASD) demonstrate deficits in language comprehension, but little is known about how they process spoken language as it unfolds. Real-time lexical comprehension is associated with language and cognition in children without ASD, suggesting that this may also be the case for children with ASD. This study adopted an individual differences approach to characterizing real-time comprehension of familiar words in a group of 34 three- to six-year-olds with ASD. The looking-while-listening paradigm was employed; it measures online accuracy and latency through language-mediated eye movements and has limited task demands. On average, children demonstrated comprehension of the familiar words, but considerable variability emerged. Children with better accuracy were faster to process the familiar words. In combination, processing speed and comprehension on a standardized language assessment explained 63% of the variance in online accuracy. Online accuracy was not correlated with autism severity or maternal education, and nonverbal cognition did not explain unique variance. Notably, online accuracy at age 5½ was related to vocabulary comprehension three years earlier. The words typically learned earliest in life were processed most quickly. Consistent with a dimensional view of language abilities, these findings point to similarities in patterns of language acquisition in typically developing children and those with ASD. Overall, our results emphasize the value of examining individual differences in real-time language comprehension in this population. We propose that the looking-while-listening paradigm is a sensitive and valuable methodological tool that can be applied across many areas of autism research.
PMID:23696214
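The two looking-while-listening measures, online accuracy and latency, are straightforward to compute from coded gaze samples; the data format and the 300-1800 ms window below are assumptions for illustration, not the study's actual analysis pipeline.

```python
# Illustrative looking-while-listening measures from coded gaze samples.
# Window bounds and data format are assumptions, not the study's pipeline.
def lwl_measures(samples, onset_ms, window=(300, 1800)):
    """samples: list of (time_ms, gaze) pairs, with gaze in
    {'target', 'distractor', 'away'}; onset_ms: noun onset time."""
    lo, hi = onset_ms + window[0], onset_ms + window[1]
    on_task = [g for t, g in samples
               if lo <= t <= hi and g in ('target', 'distractor')]
    # Accuracy: proportion of on-task looking directed at the target.
    accuracy = (on_task.count('target') / len(on_task)) if on_task else None
    # Latency (simplified): first target fixation at or after noun onset;
    # analyses often restrict this to trials starting on the distractor.
    latency = next((t - onset_ms for t, g in samples
                    if t >= onset_ms and g == 'target'), None)
    return accuracy, latency
```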
ERIC Educational Resources Information Center
Castro, Paloma; Sercu, Lies; Mendez Garcia, Maria del Carmen
2004-01-01
A recent shift has been noticeable in foreign language education theory. Previously, foreign languages were taught as a linguistic code. This then shifted to teaching that code against the sociocultural background of, primarily, one country in which the foreign language is spoken as a national language. More recently, teaching has reflected on…
Kanto, Laura; Huttunen, Kerttu; Laakso, Marja-Leena
2013-04-01
We explored variation in the linguistic environments of hearing children of Deaf parents and how it was associated with their early bilingual language development. For that purpose, we followed the children's productive vocabulary (measured with the MCDI; MacArthur Communicative Development Inventory) and syntactic complexity (measured with the MLU10; mean length of the 10 longest utterances the child produced during videorecorded play sessions) in both Finnish Sign Language and spoken Finnish between the ages of 12 and 30 months. Additionally, we developed a new methodology for describing the linguistic environments of the children (N = 10). Large variation was uncovered among the children in both the amount and type of language input and in language acquisition. Language exposure and increases in productive vocabulary and syntactic complexity were interconnected. Language acquisition was found to be more dependent on the amount of exposure in sign language than in spoken language, which we judged to be related to the status of sign language as a minority language. The results are discussed in terms of parents' language choices, family dynamics in Deaf-parented families, and optimal conditions for bilingual development.
Language Shift or Increased Bilingualism in South Africa: Evidence from Census Data
ERIC Educational Resources Information Center
Posel, Dorrit; Zeller, Jochen
2016-01-01
In the post-apartheid era, South Africa has adopted a language policy that gives official status to 11 languages (English, Afrikaans, and nine Bantu languages). However, English has remained the dominant language of business, public office, and education, and some research suggests that English is increasingly being spoken in domestic settings.…
A Grammar of Sierra Popoluca (Soteapanec, a Mixe-Zoquean Language)
ERIC Educational Resources Information Center
de Jong Boudreault, Lynda J.
2009-01-01
This dissertation is a comprehensive description of the grammar of Sierra Popoluca (SP, aka Soteapanec), a Mixe-Zoquean language spoken by approximately 28,000 people in Veracruz, Mexico. This grammar begins with an introduction to the language, its language family, a typological overview of the language, a brief history of my fieldwork, and the…
Uncertainty in the Community Language Classroom: A Response to Michael Clyne.
ERIC Educational Resources Information Center
Stuart-Smith, Jane
1997-01-01
Response to an article on community languages in Australia supports the argument that community language speakers do not have an advantage over non-speakers in the community language classroom, but can be disadvantaged by differences between the language taught in the classroom and that spoken in homes. Examples are drawn from Punjabi instruction…
A Corpus-Based Study on Turkish Spoken Productions of Bilingual Adults
ERIC Educational Resources Information Center
Agçam, Reyhan; Bulut, Adem
2016-01-01
The current study investigated whether monolingual adult speakers of Turkish and bilingual adult speakers of Arabic and Turkish significantly differ regarding their spoken productions in Turkish. Accordingly, two groups of undergraduate students studying Turkish Language and Literature at a state university in Turkey were presented two videos on a…
Effects of Prosody and Position on the Timing of Deictic Gestures
ERIC Educational Resources Information Center
Rusiewicz, Heather Leavy; Shaiman, Susan; Iverson, Jana M.; Szuminsky, Neil
2013-01-01
Purpose: In this study, the authors investigated the hypothesis that the perceived tight temporal synchrony of speech and gesture is evidence of an integrated spoken language and manual gesture communication system. It was hypothesized that experimental manipulations of the spoken response would affect the timing of deictic gestures. Method: The…
Spoken English. "Educational Review" Occasional Publications Number Two.
ERIC Educational Resources Information Center
Wilkinson, Andrew; And Others
Modifications of current assumptions both about the nature of the spoken language and about its functions in relation to personality development are suggested in this book. The discussion covers an explanation of "oracy" (the oral skills of speaking and listening); the contributions of linguistics to the teaching of English in Britain; the…
Monitoring the Performance of Human and Automated Scores for Spoken Responses
ERIC Educational Resources Information Center
Wang, Zhen; Zechner, Klaus; Sun, Yu
2018-01-01
As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…
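A standard statistic for comparing automated scores against human ratings on an ordinal scale is quadratic weighted kappa. The Python sketch below is a generic illustration of that metric, not the authors' monitoring procedure; the 1-4 score range and the toy ratings are invented for the example.

```python
from collections import Counter

def quadratic_weighted_kappa(human, machine, min_score, max_score):
    """Chance-corrected agreement between two raters on an ordinal scale."""
    n = len(human)
    span = (max_score - min_score) ** 2          # normalizer for the weights
    observed = Counter(zip(human, machine))      # joint score-pair counts
    h_marginal = Counter(human)                  # marginals used for the
    m_marginal = Counter(machine)                # chance-agreement baseline
    num = den = 0.0
    for i in range(min_score, max_score + 1):
        for j in range(min_score, max_score + 1):
            w = (i - j) ** 2 / span              # quadratic disagreement weight
            num += w * observed.get((i, j), 0) / n
            den += w * (h_marginal.get(i, 0) / n) * (m_marginal.get(j, 0) / n)
    return 1.0 - num / den

# Invented toy ratings on a 1-4 scale for eight spoken responses.
human   = [3, 2, 4, 3, 1, 2, 4, 3]
machine = [3, 2, 3, 3, 1, 3, 4, 2]
print(round(quadratic_weighted_kappa(human, machine, 1, 4), 3))  # 0.778
```

Running the same computation separately for individual items or for subgroups of test takers, as the abstract suggests, would flag where human-machine agreement degrades.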
Phonological Awareness: Explicit Instruction for Young Deaf and Hard-of-Hearing Children
ERIC Educational Resources Information Center
Miller, Elizabeth M.; Lederberg, Amy R.; Easterbrooks, Susan R.
2013-01-01
The goal of this study was to explore the development of spoken phonological awareness for deaf and hard-of-hearing children (DHH) with functional hearing (i.e., the ability to access spoken language through hearing). Teachers explicitly taught five preschoolers the phonological awareness skills of syllable segmentation, initial phoneme isolation,…
The Structure of Ayacucho Quechua.
ERIC Educational Resources Information Center
Parker, Gary J.; Sola, Donald F.
This linguistic description of Ayacucho Quechua is intended to be a fairly complete analysis of the spoken language. Used with the authors' Spoken Ayacucho Quechua course, it is a comprehensive reference work for the student as well as a contribution to the field of descriptive linguistics. Because of the high degree of inflection and syntactic…
Teaching English as a "Second Language" in Kenya and the United States: Convergences and Divergences
ERIC Educational Resources Information Center
Roy-Campbell, Zaline M.
2015-01-01
English is spoken in five countries as the native language and in numerous other countries as an official language and the language of instruction. In countries where English is the native language, it is taught to speakers of other languages as an additional language to enable them to participate in all domains of life of that country. In many…
Hampton Wray, Amanda; Weber-Fox, Christine
2013-07-01
The neural activity mediating language processing in young children is characterized by large individual variability that is likely related in part to individual strengths and weaknesses across various cognitive abilities. The current study addresses the following question: How does proficiency in specific cognitive and language functions impact neural indices mediating language processing in children? Thirty typically developing seven- and eight-year-olds were divided into high-normal and low-normal proficiency groups based on performance on nonverbal IQ, auditory word recall, and grammatical morphology tests. Event-related brain potentials (ERPs) were elicited by semantic anomalies and phrase structure violations in naturally spoken sentences. The proficiency for each of the specific cognitive and language tasks uniquely contributed to specific aspects (e.g., timing and/or resource allocation) of neural indices underlying semantic (N400) and syntactic (P600) processing. These results suggest that distinct aptitudes within broader domains of cognition and language, even within the normal range, influence the neural signatures of semantic and syntactic processing. Furthermore, the current findings have important implications for the design and interpretation of developmental studies of ERPs indexing language processing, and they highlight the need to take into account cognitive abilities both within and outside the classic language domain. Copyright © 2013 Elsevier Ltd. All rights reserved.
Kukona, Anuenue; Tabor, Whitney
2011-01-01
The visual world paradigm presents listeners with a challenging problem: they must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the visual world paradigm, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the visual world paradigm. PMID:21609355
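As a rough intuition for the attractor dynamics this abstract describes, here is a deliberately minimal toy in Python: a one-dimensional state drifts on a double-well landscape whose two wells stand in for two fixation targets, while small impulses from language and vision nudge it toward one well. All parameters are invented for illustration; the published Impulse Processing model and its neural-network implementation are substantially richer.

```python
import random

def step(x: float, impulse: float, dt: float = 0.05) -> float:
    """One noisy Euler step on a double-well landscape with attractors at -1 and +1."""
    dVdx = 4 * x * (x**2 - 1)          # gradient of V(x) = (x^2 - 1)^2
    return x - dt * dVdx + impulse + random.gauss(0, 0.01)

x = 0.0                                 # undecided starting state between targets
for _ in range(200):
    # Language and visual context each contribute a minute push toward
    # the +1 attractor (e.g., the named referent is also visually salient).
    x = step(x, impulse=0.005 + 0.005)

print(f"eyes settle on target {'B' if x > 0 else 'A'} (x = {x:.2f})")
```

The key design idea the toy preserves is that neither input alone dictates behavior: the trajectory emerges from the accumulation of small impulses on a shared landscape.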
Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing
Rauschecker, Josef P; Scott, Sophie K
2010-01-01
Speech and language are considered uniquely human abilities: animals have communication systems, but they do not match human linguistic skills in terms of recursive structure and combinatorial power. Yet, in evolution, spoken language must have emerged from neural mechanisms at least partially available in animals. In this paper, we will demonstrate how our understanding of speech perception, one important facet of language, has profited from findings and theory in nonhuman primate studies. Chief among these are physiological and anatomical studies showing that primate auditory cortex, across species, shows patterns of hierarchical structure, topographic mapping and streams of functional processing. We will identify roles for different cortical areas in the perceptual processing of speech and review functional imaging work in humans that bears on our understanding of how the brain decodes and monitors speech. A new model connects structures in the temporal, frontal and parietal lobes linking speech perception and production. PMID:19471271
Cascading activation from lexical processing to letter-level processing in written word production.
Buchwald, Adam; Falconer, Carolyn
2014-01-01
Descriptions of language production have identified the processes involved in producing language and the presence and type of interaction among those processes. In the case of spoken language production, consensus has emerged that there is interaction between lexical selection processes and phoneme-level processing. This issue has received less attention in written language production. In this paper, we present a novel analysis of the writing-to-dictation performance of an individual with acquired dysgraphia revealing cascading activation from lexical processing to letter-level processing. The individual produced frequent lexical-semantic errors (e.g., chipmunk → SQUIRREL) as well as letter errors (e.g., inhibit → INBHITI) and had a profile consistent with impairment affecting both lexical processing and letter-level processing. The presence of cascading activation is suggested by lower letter accuracy on words that are more weakly activated during lexical selection than on those that are more strongly activated. We operationalize weakly activated lexemes as those produced as lexical-semantic errors (e.g., lethal in deadly → LETAHL), and strongly activated lexemes as those where the intended target word (e.g., lethal) is the lexeme selected for production.
Natural language processing of spoken diet records (SDRs).
Lacson, Ronilda; Long, William
2006-01-01
Dietary assessment is a fundamental aspect of nutritional evaluation that is essential for management of obesity as well as for assessing dietary impact on chronic diseases. Various methods have been used for dietary assessment, including written records, 24-hour recalls, and food frequency questionnaires. The use of mobile phones to provide real-time dietary records offers potential advantages for accessibility, ease of use, and automated documentation. However, understanding even a perfect transcript of spoken dietary records (SDRs) is challenging for people. This work presents a first step towards automatic analysis of SDRs. Our approach consists of four steps: identification of food items, identification of food quantifiers, classification of food quantifiers, and temporal annotation. Our method enables automatic extraction of dietary information from SDRs, which in turn allows automated mapping to a Diet History Questionnaire dietary database. Our model has an accuracy of 90%. This work demonstrates the feasibility of automatically processing SDRs.
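The four-step pipeline lends itself to a rule-based sketch. The Python example below is illustrative only: the keyword lists and the simple count-versus-unit classification are hypothetical stand-ins for the authors' actual lexicons, classifiers, and the mapping to the Diet History Questionnaire database.

```python
import re

# Hypothetical mini-lexicons; the published system used far richer resources.
FOOD_ITEMS = {"oatmeal", "coffee", "apple", "toast"}
UNIT_QUANTIFIERS = {"cup", "cups", "slice", "slices", "bowl", "bowls"}
TIME_CUES = {"breakfast", "lunch", "dinner", "morning", "evening"}

def parse_sdr(utterance: str) -> dict:
    """Four illustrative steps: find food items, find quantifiers,
    classify quantifiers (count vs. unit), and annotate meal time."""
    tokens = re.findall(r"[a-z]+|\d+", utterance.lower())
    record = {"foods": [], "quantifiers": [], "time": None}
    for tok in tokens:
        if tok in FOOD_ITEMS:                 # step 1: food item identification
            record["foods"].append(tok)
        elif tok.isdigit():                   # steps 2-3: numeric count quantifier
            record["quantifiers"].append((tok, "count"))
        elif tok in UNIT_QUANTIFIERS:         # steps 2-3: unit quantifier
            record["quantifiers"].append((tok, "unit"))
        elif tok in TIME_CUES:                # step 4: temporal annotation
            record["time"] = tok
    return record

print(parse_sdr("For breakfast I had 2 slices of toast and a cup of coffee"))
# {'foods': ['toast', 'coffee'], 'quantifiers': [('2', 'count'),
#  ('slices', 'unit'), ('cup', 'unit')], 'time': 'breakfast'}
```

A real system would additionally need to attach each quantifier to the right food item and normalize units before mapping to a dietary database; the sketch stops at extraction.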
Guest Comment: Universal Language Requirement.
ERIC Educational Resources Information Center
Sherwood, Bruce Arne
1979-01-01
Explains that the ability to read English is almost universal among scientists; however, there are enormous problems with spoken English. Advocates the use of Esperanto as a viable alternative and as a language requirement for graduate work. (GA)
Discussion Forum Interactions: Text and Context
ERIC Educational Resources Information Center
Montero, Begona; Watts, Frances; Garcia-Carbonell, Amparo
2007-01-01
Computer-mediated communication (CMC) is currently used in language teaching as a bridge for the development of written and spoken skills [Kern, R., 1995. "Restructuring classroom interaction with networked computers: effects on quantity and characteristics of language production." "The Modern Language Journal" 79, 457-476]. Within CMC…
An Introduction to Spoken Setswana.
ERIC Educational Resources Information Center
Mistry, Karen S.
A guide to instruction in Setswana, the most widely dispersed Bantu language in Southern Africa, includes general material about the language, materials for the teacher, 163 lessons, vocabulary lists, and supplementary materials and exercises. Introductory material about the language discusses its distribution and characteristics, and orthography.…
Where Should We Look for Language?
ERIC Educational Resources Information Center
Stokoe, William C.
1986-01-01
Argues that the beginnings of language need to be sought not in the universal abstract grammar proposed by Chomsky but in the evolution of the everyday interaction of the human species. Studies indicate that there is no great gulf between spoken language and nonverbal communication. (SED)
ERIC Educational Resources Information Center
Hyslop, Gwendolyn
2011-01-01
Kurtop is a Tibeto-Burman language spoken by approximately 15,000 people in Northeastern Bhutan. This dissertation is the first descriptive grammar of the language, based on extensive fieldwork and community-driven language documentation in Bhutan. When possible, analyses are presented in typological and historical/comparative perspectives and…
Berber Dialects. Materials Status Report.
ERIC Educational Resources Information Center
Center for Applied Linguistics, Washington, DC. Language/Area Reference Center.
The materials status report for the Berber languages, minority languages spoken in northern Africa, is one of a series intended to provide the nonspecialist with a picture of the availability and quality of texts for teaching various languages to English speakers. The report consists of: (1) a brief narrative description of the Berber language,…
The Unified Phonetic Transcription for Teaching and Learning Chinese Languages
ERIC Educational Resources Information Center
Shieh, Jiann-Cherng
2011-01-01
In order to preserve distinctive cultures, people are eager to devise writing systems for their languages as recording tools. Mandarin, Taiwanese, and Hakka are the three major and most widely spoken dialects of the Han languages in Chinese society. Their writing systems are all in Han characters. Various and independent phonetic…
ERIC Educational Resources Information Center
Wilson, Leanne; McNeill, Brigid; Gillon, Gail T.
2015-01-01
Successful collaboration among speech and language therapists (SLTs) and teachers fosters the creation of communication friendly classrooms that maximize children's spoken and written language learning. However, these groups of professionals may have insufficient opportunity in their professional study to develop the shared knowledge, perceptions…
Dilemmatic Aspects of Language Policies in a Trilingual Preschool Group
ERIC Educational Resources Information Center
Puskás, Tünde; Björk-Willén, Polly
2017-01-01
This article explores dilemmatic aspects of language policies in a preschool group in which three languages (Swedish, Romani and Arabic) are spoken on an everyday basis. The article highlights the interplay between policy decisions on the societal level, the teachers' interpretations of these policies, as well as language practices on the micro…
Making a Difference: Language Teaching for Intercultural and International Dialogue
ERIC Educational Resources Information Center
Byram, Michael; Wagner, Manuela
2018-01-01
Language teaching has long been associated with teaching in a country or countries where a target language is spoken, but this approach is inadequate. In the contemporary world, language teaching has a responsibility to prepare learners for interaction with people of other cultural backgrounds, teaching them skills and attitudes as well as…
Kornai, András
2013-01-01
Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide. PMID:24167559
Politeness Strategies among Native and Romanian Speakers of English
ERIC Educational Resources Information Center
Ambrose, Dominic
1995-01-01
Background: Politeness strategies vary from language to language and within each society. At times the wrong strategies can have disastrous effects. This can occur when languages are used by non-native speakers or when they are used outside of their own home linguistic context. Purpose: This study of spoken language compares the politeness…
Vernacular Literacy in the Touo Language of the Solomon Islands
ERIC Educational Resources Information Center
Dunn, Michael
2005-01-01
The Touo language is a non-Austronesian language spoken on Rendova Island (Western Province, Solomon Islands). First language speakers of Touo are typically multilingual, and are likely to speak other (Austronesian) vernaculars, as well as Solomon Island Pijin and English. There is no institutional support of literacy in Touo: schools function in…
Documenting Indigenous Knowledge and Languages: Research Planning & Protocol.
ERIC Educational Resources Information Center
Leonard, Beth
2001-01-01
The author's experiences of learning her heritage language of Deg Xinag, an Athabascan language spoken in Alaska, serve as a backdrop for discussing issues in learning endangered indigenous languages. When Deg Xinag is taught by linguists, obvious differences between English and Deg Xinag are not articulated, due to the lack of knowledge of…
Kaqchikel Maya Language Analysis Project
ERIC Educational Resources Information Center
Eddy de Pappa, Sarah
2010-01-01
The purpose of this analysis was to study the linguistic features of Kaqchikel, a Mayan language currently spoken in Guatemala and increasingly in the United States, in an effort to better prepare teachers of English as a second language (ESL) or English as a foreign language (EFL) to address the distinct needs of a frequently neglected and…
Regional Sign Language Varieties in Contact: Investigating Patterns of Accommodation
ERIC Educational Resources Information Center
Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy
2016-01-01
Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…
ERIC Educational Resources Information Center
Godwin-Jones, Robert
2008-01-01
Creating effective electronic tools for language learning frequently requires large data sets containing extensive examples of actual human language use. Collections of authentic language in spoken and written forms provide developers the means to enrich their applications with real world examples. As the Internet continues to expand…
Language and Literacy Development of Deaf and Hard-of-Hearing Children: Successes and Challenges
ERIC Educational Resources Information Center
Lederberg, Amy R.; Schick, Brenda; Spencer, Patricia E.
2013-01-01
Childhood hearing loss presents challenges to language development, especially spoken language. In this article, we review existing literature on deaf and hard-of-hearing (DHH) children's patterns and trajectories of language as well as development of theory of mind and literacy. Individual trajectories vary significantly, reflecting access to…
The Pursuit of Language Appropriate Care: Remote Simultaneous Medical Interpretation Use
ERIC Educational Resources Information Center
Logan, Debra M.
2010-01-01
Background: The U.S. government mandates nurses to deliver linguistically appropriate care to hospital patients. It is difficult for nurses to implement the language mandates because there are 6,912 active living languages spoken in the world. Language barriers appear to place limited English proficient (LEP) patients at increased risk for harm…
Language Education Policies and Inequality in Africa: Cross-National Empirical Evidence
ERIC Educational Resources Information Center
Coyne, Gary
2015-01-01
This article examines the relationship between inequality and education through the lens of colonial language education policies in African primary and secondary school curricula. The languages of former colonizers almost always occupy important places in society, yet they are not widely spoken as first languages, meaning that most people depend…
Spoken Oral Language and Adult Struggling Readers
ERIC Educational Resources Information Center
Bakhtiari, Dariush; Greenberg, Daphne; Patton-Terry, Nicole; Nightingale, Elena
2015-01-01
Oral language is a critical component of reading acquisition. Much of the research concerning the relationship between oral language and reading ability focuses on children, while there is a paucity of research focusing on this relationship for adults who struggle with their reading. Oral language as defined in this paper…
Rhythm in language acquisition.
Langus, Alan; Mehler, Jacques; Nespor, Marina
2017-10-01
Spoken language is governed by rhythm. Linguistic rhythm is hierarchical and the rhythmic hierarchy partially mimics the prosodic as well as the morpho-syntactic hierarchy of spoken language. It can thus provide learners with cues about the structure of the language they are acquiring. We identify three universal levels of linguistic rhythm - the segmental level, the level of the metrical feet and the phonological phrase level - and discuss why primary lexical stress is not rhythmic. We survey experimental evidence on rhythm perception in young infants and native speakers of various languages to determine the properties of linguistic rhythm that are present at birth, those that mature during the first year of life and those that are shaped by the linguistic environment of language learners. We conclude with a discussion of the major gaps in current knowledge on linguistic rhythm and highlight areas of interest for future research that are most likely to yield significant insights into the nature, the perception, and the usefulness of linguistic rhythm. Copyright © 2016 Elsevier Ltd. All rights reserved.
Singing can facilitate foreign language learning.
Ludke, Karen M; Ferreira, Fernanda; Overy, Katie
2014-01-01
This study presents the first experimental evidence that singing can facilitate short-term paired-associate phrase learning in an unfamiliar language (Hungarian). Sixty adult participants were randomly assigned to one of three "listen-and-repeat" learning conditions: speaking, rhythmic speaking, or singing. Participants in the singing condition showed superior overall performance on a collection of Hungarian language tests after a 15-min learning period, as compared with participants in the speaking and rhythmic speaking conditions. This superior performance was statistically significant (p < .05) for the two tests that required participants to recall and produce spoken Hungarian phrases. The differences in performance were not explained by potentially influencing factors such as age, gender, mood, phonological working memory ability, or musical ability and training. These results suggest that a "listen-and-sing" learning method can facilitate verbatim memory for spoken foreign language phrases.
Vocal Interaction between Children with Down syndrome and their Parents
Thiemann-Bourque, Kathy S.; Warren, Steven F.; Brady, Nancy; Gilkerson, Jill; Richards, Jeffrey A.
2014-01-01
Purpose: The purpose of this study was to describe differences in parent input and child vocal behaviors of children with Down syndrome (DS) compared to typically developing (TD) children. The goals were to describe the language-learning environments at distinctly different ages in early childhood. Method: Nine children with DS and nine age-matched TD children participated; four children in each group were ages 9–11 months, and five were between 25 and 54 months. Measures were derived from automated vocal analysis. A digital language processor measured the richness of the child's language environment, including number of adult words, conversational turns, and child vocalizations. Results: Analyses indicated no significant differences in words spoken by parents of younger vs. older children with DS, and significantly more words spoken by parents of TD children than parents of children with DS. Differences between the DS and TD groups were observed in rates of all vocal behaviors, with no differences noted between the younger vs. older children with DS; the younger TD children did not vocalize significantly more than the younger DS children. Conclusions: Parents of children with DS continue to provide consistent levels of input across the early language-learning years; however, child vocal behaviors remain low after the age of 24 months, suggesting the need for additional and alternative intervention approaches. PMID:24686777
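For intuition, the three environment measures named in the Method (adult words, child vocalizations, conversational turns) can be computed from any diarized, time-ordered transcript. The Python sketch below uses one simplistic turn definition and invented data; it is not the automated vocal-analysis (LENA-style) processing used in the study.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str   # "adult" or "child"
    text: str      # words, or "" for a nonword vocalization

def environment_measures(segments: list[Segment]) -> dict:
    adult_words = sum(len(s.text.split()) for s in segments if s.speaker == "adult")
    child_vocs = sum(1 for s in segments if s.speaker == "child")
    # One simplistic turn definition: any adjacent pair of segments in
    # which an adult and the child alternate counts as a conversational turn.
    turns = sum(
        1
        for prev, cur in zip(segments, segments[1:])
        if {prev.speaker, cur.speaker} == {"adult", "child"}
    )
    return {"adult_words": adult_words,
            "child_vocalizations": child_vocs,
            "conversational_turns": turns}

session = [Segment("adult", "look at the ball"),
           Segment("child", "ba"),
           Segment("adult", "yes a ball")]
print(environment_measures(session))
# {'adult_words': 7, 'child_vocalizations': 1, 'conversational_turns': 2}
```

Rates like those reported in the Results would then be obtained by dividing each count by recording duration, which a time-stamped transcript would supply.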
Gesture, sign, and language: The coming of age of sign language and gesture studies.
Goldin-Meadow, Susan; Brentari, Diane
2017-01-01
How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
ERIC Educational Resources Information Center
Caplan, David; Waters, Gloria; Bertram, Julia; Ostrowski, Adam; Michaud, Jennifer
2016-01-01
The authors assessed 4,865 middle and high school students for the ability to recognize and understand written and spoken morphologically simple words, morphologically complex words, and the syntactic structure of sentences and for the ability to answer questions about facts presented in a written passage and to make inferences based on those…
ERIC Educational Resources Information Center
Poole, Deborah
2008-01-01
This paper focuses on the process of literacy socialization in several 5th grade reading groups. Through close analysis of spoken interaction, which centers on a heavily illustrated, non-fiction text, the paper proposes that these reading groups can be seen as complex sites of socialization to the values associated with essayist literacy (i.e.,…
ERIC Educational Resources Information Center
Salazar, John H.
The process of self-identification by persons of Mexican and other Spanish ancestry and its relationship to reference group theory is discussed. The study examines the relationship patterns between such independent variables as age, sex, years of formal education, birthplace, birthplace of parents, and language spoken in the home with various forms…
ERIC Educational Resources Information Center
Segal, Osnat; Kishon-Rabin, Liat
2017-01-01
Purpose: The stressed word in a sentence (narrow focus [NF]) conveys information about the intent of the speaker and is therefore important for processing spoken language and in social interactions. The ability of participants with severe-to-profound prelingual hearing loss to comprehend NF has rarely been investigated. The purpose of this study…
ERIC Educational Resources Information Center
Halliday, L. F.; Bishop, D. V. M.
2006-01-01
Specific reading disability (SRD) is now widely recognised as often being caused by phonological processing problems, affecting analysis of spoken as well as written language. According to one theoretical account, these phonological problems are due to low-level problems in auditory perception of dynamic acoustic cues. Evidence for this has come…
The effect of written text on comprehension of spoken English as a foreign language.
Diao, Yali; Chandler, Paul; Sweller, John
2007-01-01
Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.
ERIC Educational Resources Information Center
Slavkov, Nikolay
2017-01-01
This article reports on a survey with 170 school-age children growing up with two or more languages in the Canadian province of Ontario where English is the majority language, French is a minority language, and numerous other minority languages may be spoken by immigrant or Indigenous residents. Within this context the study focuses on minority…
Key Data on Teaching Languages at School in Europe. 2017 Edition. Eurydice Report
ERIC Educational Resources Information Center
Baïdak, Nathalie; Balcon, Marie-Pascale; Motiejunaite, Akvile
2017-01-01
Linguistic diversity is part of Europe's DNA. It embraces not only the official languages of Member States, but also the regional and/or minority languages spoken for centuries on European territory, as well as the languages brought by the various waves of migrants. The coexistence of this variety of languages constitutes an asset, but it is also…
ERIC Educational Resources Information Center
Palmer, Barbara C.; Chen, Chia-I; Chang, Sara; Leclere, Judith T.
2006-01-01
According to the 2000 United States Census, Americans age five and older who speak a language other than English at home grew 47 percent over the preceding decade. This group accounts for slightly less than one in five Americans (17.9%). Among the minority languages spoken in the United States, Asian-language speakers, including Chinese and other…