Science.gov

Sample records for chinese spoken language

  1. Automatic translation among spoken languages

    NASA Astrophysics Data System (ADS)

    Walter, Sharon M.; Costigan, Kelly

    1994-02-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.
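
    The pipeline the abstract describes is a chain of three components: speech recognition, machine translation, and speech generation, run in each direction. Below is a minimal sketch of that control flow in which every stage is a stand-in stub (MAVT's actual components are not public APIs, so all names here are illustrative):

```python
def recognize_speech(audio, language):
    # Stub ASR: pretend the "audio" is already a transcript so the
    # sketch stays runnable without a real recognizer.
    return audio

def machine_translate(text, src, tgt):
    # Stub MT: word-for-word lookup in a toy lexicon.
    lexicon = {("en", "es"): {"where": "dónde", "is": "está"}}
    table = lexicon.get((src, tgt), {})
    return " ".join(table.get(w, w) for w in text.lower().split())

def synthesize_speech(text, language):
    return f"[{language} speech] {text}"  # stub TTS: no real synthesis

def translate_exchange(utterance, direction="en->es"):
    """Carry one utterance through ASR -> MT -> TTS."""
    src, tgt = direction.split("->")
    transcript = recognize_speech(utterance, language=src)
    return synthesize_speech(machine_translate(transcript, src, tgt),
                             language=tgt)

print(translate_exchange("Where is the bridge"))
# -> [es speech] dónde está the bridge
```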

  2. Automatic translation among spoken languages

    NASA Technical Reports Server (NTRS)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  3. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  4. What Do Second Language Listeners Know about Spoken Words? Effects of Experience and Attention in Spoken Word Processing

    ERIC Educational Resources Information Center

    Trofimovich, Pavel

    2008-01-01

    With a goal of investigating psycholinguistic bases of spoken word processing in a second language (L2), this study examined L2 learners' sensitivity to phonological information in spoken L2 words as a function of their L2 experience and attentional demands of a learning task. Fifty-two Chinese learners of English who differed in amount of L2…

  5. Predictors of spoken language learning

    PubMed Central

    Wong, Patrick C. M.; Ettlinger, Marc

    2011-01-01

    We report two sets of experiments showing that the large individual variability in language learning success in adults can be attributed to neurophysiological, neuroanatomical, cognitive, and perceptual factors. In the first set of experiments, native English-speaking adults learned to incorporate lexically meaningful pitch patterns in words. We found that successful learners had higher activation in bilateral auditory cortex, larger volume in Heschl’s Gyrus, and more accurate pitch pattern perception. All of these measures were taken before training began. In the second set of experiments, native English-speaking adults learned a phonological grammatical system governing the formation of words of an artificial language. Again, neurophysiological, neuroanatomical, and cognitive factors predicted to an extent how well these adults learned. Taken together, these experiments suggest that neural and behavioral factors can be used to predict spoken language learning. These predictors can inform the redesign of existing training paradigms to maximize learning for learners with different learning profiles. Learning outcomes: Readers will be able to: (a) understand the linguistic concepts of lexical tone and phonological grammar, (b) identify the brain regions associated with learning lexical tone and phonological grammar, and (c) identify the cognitive predictors for successful learning of a tone language and phonological rules. PMID:21601868

  6. Building Spoken Language in the First Plane

    ERIC Educational Resources Information Center

    Bettmann, Joen

    2016-01-01

    Through a strong Montessori orientation to the parameters of spoken language, Joen Bettmann makes the case for "materializing" spoken knowledge using the stimulation of real objects and real situations that promote mature discussion around the sensorial aspect of the prepared environment. She lists specific materials in the classroom…

  7. Developmental Phonological Disorders: Processing of Spoken Language.

    ERIC Educational Resources Information Center

    Dodd, Barbara; Basset, Barbara

    1987-01-01

    The ability of 22 phonologically disordered and normally speaking children to process spoken language (phonologically, syntactically, and semantically) was evaluated. No differences between groups were found in the number of errors, pattern of errors, or reaction times when monitoring sentences for target words, irrespective of sentence type.…

  8. Direction Asymmetries in Spoken and Signed Language Interpreting

    ERIC Educational Resources Information Center

    Nicodemus, Brenda; Emmorey, Karen

    2013-01-01

    Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…

  9. Predictors of Spoken Language Learning

    ERIC Educational Resources Information Center

    Wong, Patrick C. M.; Ettlinger, Marc

    2011-01-01

    We report two sets of experiments showing that the large individual variability in language learning success in adults can be attributed to neurophysiological, neuroanatomical, cognitive, and perceptual factors. In the first set of experiments, native English-speaking adults learned to incorporate lexically meaningful pitch patterns in words. We…

  10. Spoken Grammar and Its Role in the English Language Classroom

    ERIC Educational Resources Information Center

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  11. We Asked...You Told Us. Language Spoken at Home.

    ERIC Educational Resources Information Center

    Bureau of the Census (DOC), Washington, DC. Economics and Statistics Administration.

    Responses to the 1990 United States census question concerning languages other than English that are spoken at home are summarized. It was found that in 1990, 14 percent of the population 5 years and older spoke a language other than English at home, as contrasted with 11 percent in 1980. Languages spoken most commonly at home in descending order…

  12. Deep Bottleneck Features for Spoken Language Identification

    PubMed Central

    Jiang, Bing; Song, Yan; Wei, Si; Liu, Jun-Hua; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short-duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on the NIST Language Recognition Evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89%, and 7.01% for 30 s, 10 s, and 3 s test utterances, respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system is proposed. PMID:24983963
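
    The equal error rate (EER) quoted above is the operating point at which the false-alarm rate equals the miss rate. Below is a minimal sketch of how an EER can be computed from a pool of detection scores; the normally distributed scores are synthetic stand-ins, not LRE09 data.

```python
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    """Return the EER: the point where miss and false-alarm rates cross."""
    scores = np.concatenate([target_scores, nontarget_scores])
    labels = np.concatenate([np.ones(len(target_scores)),
                             np.zeros(len(nontarget_scores))])
    order = np.argsort(scores)            # sweep the threshold upward
    labels = labels[order]
    # miss rate: targets scored at or below the threshold
    miss = np.cumsum(labels) / len(target_scores)
    # false-alarm rate: nontargets scored above the threshold
    fa = 1.0 - np.cumsum(1 - labels) / len(nontarget_scores)
    idx = int(np.argmin(np.abs(miss - fa)))
    return 0.5 * (miss[idx] + fa[idx])

rng = np.random.default_rng(0)
tgt = rng.normal(2.0, 1.0, 1000)   # scores for correct-language trials
non = rng.normal(0.0, 1.0, 9000)   # scores for wrong-language trials
print(f"EER = {compute_eer(tgt, non):.2%}")
```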

  13. Spoken Language Processing Model: Bridging Auditory and Language Processing to Guide Assessment and Intervention

    ERIC Educational Resources Information Center

    Medwetsky, Larry

    2011-01-01

    Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…

  14. Spoken Language Research and ELT: Where Are We Now?

    ERIC Educational Resources Information Center

    Timmis, Ivor

    2012-01-01

    This article examines the relationship between spoken language research and ELT practice over the last 20 years. The first part is retrospective. It seeks first to capture the general tenor of recent spoken research findings through illustrative examples. The article then considers the sociocultural issues that arose when the relevance of these…

  15. Languages Spoken by English Learners (ELs). Fast Facts

    ERIC Educational Resources Information Center

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on languages spoken by English learners (ELs) are: (1) Twenty most common EL languages, as reported in states' top five lists: SY 2013-14; (2) States,…

  16. Orthographic Facilitation Effects on Spoken Word Production: Evidence from Chinese

    ERIC Educational Resources Information Center

    Zhang, Qingfang; Weekes, Brendan Stuart

    2009-01-01

    The aim of this experiment was to investigate the time course of orthographic facilitation on picture naming in Chinese. We used a picture-word paradigm to investigate orthographic and phonological facilitation on monosyllabic spoken word production in native Mandarin speakers. Both the stimulus-onset asynchrony (SOA) and the picture-word…

  17. Usable, real-time, interactive spoken language systems

    NASA Astrophysics Data System (ADS)

    Makhoul, J.; Bates, M.

    1994-09-01

    The objective of this project was to make the next significant advance in human-machine interaction by developing a spoken language system (SLS) that operates in real time while maintaining high accuracy on cost-effective COTS (commercial, off-the-shelf) hardware. The system has a highly interactive user interface, is largely user-independent, and is designed to be easily portable to new applications. The BBN HARC spoken language system consists of the Byblos speech recognition system and the Delphi or HUM language understanding system.

  18. Developmental differences in the influence of phonological similarity on spoken word processing in Mandarin Chinese.

    PubMed

    Malins, Jeffrey G; Gao, Danqi; Tao, Ran; Booth, James R; Shu, Hua; Joanisse, Marc F; Liu, Li; Desroches, Amy S

    2014-11-01

    The developmental trajectory of spoken word recognition has been well established in Indo-European languages, but to date remains poorly characterized in Mandarin Chinese. In this study, typically developing children (N=17; mean age 10;5) and adults (N=17; mean age 24) performed a picture-word matching task in Mandarin while we recorded ERPs. Mismatches diverged from expectations in different components of the Mandarin syllable; namely, word-initial phonemes, word-final phonemes, and tone. By comparing responses to different mismatch types, we uncovered evidence suggesting that both children and adults process words incrementally. However, we also observed key developmental differences in how subjects treated onset and rime mismatches. This was taken as evidence for a stronger influence of top-down processing on spoken word recognition in adults compared to children. This work therefore offers an important developmental component to theories of Mandarin spoken word recognition. PMID:25278419

  19. Developmental Differences in the Influence of Phonological Similarity on Spoken Word Processing in Mandarin Chinese

    PubMed Central

    Malins, Jeffrey G.; Gao, Danqi; Tao, Ran; Booth, James R.; Shu, Hua; Joanisse, Marc F.; Liu, Li; Desroches, Amy S.

    2014-01-01

    The developmental trajectory of spoken word recognition has been well established in Indo-European languages, but to date remains poorly characterized in Mandarin Chinese. In this study, typically developing children (N = 17; mean age 10;5) and adults (N = 17; mean age 24) performed a picture-word matching task in Mandarin while we recorded ERPs. Mismatches diverged from expectations in different components of the Mandarin syllable; namely, word-initial phonemes, word-final phonemes, and tone. By comparing responses to different mismatch types, we uncovered evidence suggesting that both children and adults process words incrementally. However, we also observed key developmental differences in how subjects treated onset and rime mismatches. This was taken as evidence for a stronger influence of top-down processing on spoken word recognition in adults compared to children. This work therefore offers an important developmental component to theories of Mandarin spoken word recognition. PMID:25278419

  20. Sharing Spoken Language: Sounds, Conversations, and Told Stories

    ERIC Educational Resources Information Center

    Birckmayer, Jennifer; Kennedy, Anne; Stonehouse, Anne

    2010-01-01

    Infants and toddlers encounter numerous spoken story experiences early in their lives: conversations, oral stories, and language games such as songs and rhymes. Many adults are even surprised to learn that children this young need these kinds of natural language experiences at all. Adults help very young children take a step along the path toward…

  1. The roles of language processing in a spoken language interface.

    PubMed Central

    Hirschman, L

    1995-01-01

    This paper provides an overview of the colloquium's discussion session on natural language understanding, which followed presentations by M. Bates [Bates, M. (1995) Proc. Natl. Acad. Sci. USA 92, 9977-9982] and R. C. Moore [Moore, R. C. (1995) Proc. Natl. Acad. Sci. USA 92, 9983-9988]. The paper reviews the dual role of language processing in providing understanding of the spoken input and an additional source of constraint in the recognition process. To date, language processing has successfully provided understanding but has provided only limited (and computationally expensive) constraint. As a result, most current systems use a loosely coupled, unidirectional interface, such as N-best or a word network, with natural language constraints as a postprocess, to filter or re-sort the recognizer output. However, the level of discourse context provides significant constraint on what people can talk about and how things can be referred to; when the system becomes an active participant, it can influence this order. But sources of discourse constraint have not been extensively explored, in part because these effects can only be seen by studying systems in the context of their use in interactive problem solving. This paper argues that we need to study interactive systems to understand what kinds of applications are appropriate for the current state of technology and how the technology can move from the laboratory toward real applications. PMID:7479811
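
    The "loosely coupled, unidirectional interface" described here is easy to picture in code: the recognizer emits an N-best list, and the language processor re-scores it as a postprocess. The sketch below assumes a toy coverage-based score standing in for a real parser or language model; all names and the tiny grammar are illustrative.

```python
def lm_score(words):
    # Hypothetical constraint: reward hypotheses whose bigrams a toy
    # "grammar" can cover; a real system would use a parser or LM.
    grammar = {("show", "flights"), ("flights", "to"), ("to", "boston")}
    bigrams = list(zip(words, words[1:]))
    covered = sum(1 for b in bigrams if b in grammar)
    return covered / max(len(bigrams), 1)

def rescore_nbest(nbest, weight=1.0):
    """Combine the acoustic score with the NL score; re-sort the list."""
    return sorted(nbest,
                  key=lambda h: h["acoustic"] + weight * lm_score(h["words"]),
                  reverse=True)

nbest = [
    {"words": ["show", "flights", "two", "boston"], "acoustic": 0.9},
    {"words": ["show", "flights", "to", "boston"], "acoustic": 0.8},
]
best = rescore_nbest(nbest)[0]
print(" ".join(best["words"]))  # the NL constraint promotes "to boston"
```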

  2. Does textual feedback hinder spoken interaction in natural language?

    PubMed

    Le Bigot, Ludovic; Terrier, Patrice; Jamet, Eric; Botherel, Valerie; Rouet, Jean-Francois

    2010-01-01

    The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback, suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The addition of textual output when the spoken modality is heavily taxed by the task was investigated. PMID:20069480

  3. Enduring Advantages of Early Cochlear Implantation for Spoken Language Development

    ERIC Educational Resources Information Center

    Geers, Anne E.; Nicholas, Johanna G.

    2013-01-01

    Purpose: In this article, the authors sought to determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12 and 38 months of age. Relative advantages of receiving a bilateral CI after age 4.5 years, better…

  4. Modular fuzzy-neuro controller driven by spoken language commands.

    PubMed

    Pulasinghe, Koliya; Watanabe, Keigo; Izumi, Kiyotaka; Kiguchi, Kazuo

    2004-02-01

    We present a methodology of controlling machines using spoken language commands. The two major problems relating to speech interfaces for machines, namely, the interpretation of words with fuzzy implications and the out-of-vocabulary (OOV) words in natural conversation, are investigated. The system proposed in this paper is designed to overcome the above two problems in controlling machines using spoken language commands. The present system consists of a hidden Markov model (HMM) based automatic speech recognizer (ASR), with a keyword spotting system to capture the machine sensitive words from the running utterances and a fuzzy-neural network (FNN) based controller to represent the words with fuzzy implications in spoken language commands. Significance of the words, i.e., the contextual meaning of the words according to the machine's current state, is introduced to the system to obtain more realistic output that matches the user's intentions. Modularity of the system is also considered to provide a generalization of the methodology for systems having heterogeneous functions without diminishing the performance of the system. The proposed system is experimentally tested by navigating a mobile robot in real time using spoken language commands. PMID:15369072
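
    As a rough illustration of "words with fuzzy implications", the sketch below maps spotted keywords onto fuzzy sets over a normalized speed axis and defuzzifies them by centroid. The vocabulary, membership functions, and keyword-spotter stub are invented for illustration; the paper's system uses an HMM-based ASR and a trained fuzzy-neural network rather than anything this simple.

```python
import numpy as np

SPEED = np.linspace(0.0, 1.0, 101)        # normalized speed universe

def tri(x, a, b, c):
    """Triangular membership function rising at a, peaking at b, falling at c."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

FUZZY_WORDS = {                            # hypothetical fuzzy vocabulary
    "slowly": tri(SPEED, 0.0, 0.1, 0.4),
    "faster": tri(SPEED, 0.4, 0.7, 0.9),
    "quickly": tri(SPEED, 0.6, 0.9, 1.0),
}

def spot_keywords(utterance):
    """Stand-in for the HMM keyword spotter: keep in-vocabulary words."""
    return [w for w in utterance.lower().split() if w in FUZZY_WORDS]

def command_to_speed(utterance):
    words = spot_keywords(utterance)
    if not words:
        return None                        # all OOV: no actionable command
    mu = np.maximum.reduce([FUZZY_WORDS[w] for w in words])
    return float((SPEED * mu).sum() / mu.sum())   # centroid defuzzification

print(command_to_speed("please move a bit more quickly"))  # ≈ 0.83
```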

  5. Associations among Play, Gesture and Early Spoken Language Acquisition

    ERIC Educational Resources Information Center

    Hall, Suzanne; Rumney, Lisa; Holler, Judith; Kidd, Evan

    2013-01-01

    The present study investigated the developmental interrelationships between play, gesture use and spoken language development in children aged 18-31 months. The children completed two tasks: (i) a structured measure of pretend (or "symbolic") play and (ii) a measure of vocabulary knowledge in which children have been shown to gesture.…

  6. Inferring Speaker Affect in Spoken Natural Language Communication

    ERIC Educational Resources Information Center

    Pon-Barry, Heather Roberta

    2013-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…

  7. Planum temporale: where spoken and written language meet.

    PubMed

    Nakada, T; Fujii, Y; Yoneoka, Y; Kwee, I L

    2001-01-01

    Functional magnetic resonance imaging studies on spoken versus written language processing were performed in 20 right-handed normal volunteers on a high-field (3.0-tesla) system. The areas activated in common by both auditory (listening) and visual (reading) language comprehension paradigms were mapped onto the planum temporale (20/20), primary auditory region (2/20), superior temporal sulcus area (2/20) and planum parietale (3/20). The study indicates that the planum temporale represents a common traffic area for cortical processing which needs to access the system of language comprehension. The destruction of this area can result in comprehension deficits in both spoken and written language, i.e. a classical case of Wernicke's aphasia. PMID:11598329

  8. Spoken Oral Language and Adult Struggling Readers

    ERIC Educational Resources Information Center

    Bakhtiari, Dariush; Greenberg, Daphne; Patton-Terry, Nicole; Nightingale, Elena

    2015-01-01

    Oral language is a critical component to the development of reading acquisition. Much of the research concerning the relationship between oral language and reading ability is focused on children, while there is a paucity of research focusing on this relationship for adults who struggle with their reading. Oral language as defined in this paper…

  9. Cognitive aging and hearing acuity: modeling spoken language comprehension

    PubMed Central

    Wingfield, Arthur; Amichetti, Nicole M.; Lash, Amanda

    2015-01-01

    The comprehension of spoken language has been characterized by a number of “local” theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model: where it is strong and where there are gaps to be filled. PMID:26124724

  10. Spoken Language Development in Children Following Cochlear Implantation

    PubMed Central

    Niparko, John K.; Tobey, Emily A.; Thal, Donna J.; Eisenberg, Laurie S.; Wang, Nae-Yuh; Quittner, Alexandra L.; Fink, Nancy E.

    2010-01-01

    Context Cochlear implantation (CI) is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe-to-profound sensorineural hearing loss (SNHL). Objective To prospectively assess spoken language acquisition following CI in young children with adjustment for covariates. Design, Setting, and Participants Prospective, longitudinal, and multidimensional assessment of spoken language growth over a 3-year period following CI. Prospective cohort study of children who underwent CI before 5 years of age (n=188) from 6 US centers and hearing children of similar ages (n=97) from 2 preschools recruited between November, 2002 and December, 2004. Follow-up completed between November, 2005 and May, 2008. Main Outcome Measures Performance on measures of spoken language comprehension and expression. Results Children undergoing CI showed greater growth in spoken language performance (10.4 points/year [95% confidence interval: 9.6–11.2] in comprehension; 8.4 [7.8–9.0] in expression) than would be predicted by their pre-CI baseline scores (5.4 [4.1–6.7] comprehension; 5.8 [4.6–7.0] expression). Although mean scores were not restored to age-appropriate levels after 3 years, significantly greater annual rates of language acquisition were observed in children who were younger at CI (1.1 [0.5–1.7] points in comprehension per year younger; 1.0 [0.6–1.5] in expression), and in children with shorter histories of hearing deficit (0.8 [0.2–1.2] points in comprehension per year shorter; 0.6 [0.2–1.0] for expression). In multivariable analyses, greater residual hearing prior to CI, higher ratings of parent-child interactions, and higher SES were associated with greater rates of growth in comprehension and expression. Conclusions The use of cochlear implants in young children was associated with better spoken language learning than would be predicted from their pre-implantation scores. However

  11. The Unified Phonetic Transcription for Teaching and Learning Chinese Languages

    ERIC Educational Resources Information Center

    Shieh, Jiann-Cherng

    2011-01-01

    In order to preserve their distinctive cultures, people devise writing systems for their languages as recording tools. Mandarin, Taiwanese, and Hakka are the three major and most widely spoken Han dialects in Chinese society. Their writing systems are all in Han characters. Various and independent phonetic…

  12. The Child's Path to Spoken Language.

    ERIC Educational Resources Information Center

    Locke, John L.

    A major synthesis of the latest research on early language acquisition, this book explores what gives infants the remarkable capacity to progress from babbling to meaningful sentences, and what inclines a child to speak. The book examines the neurological, perceptual, social, and linguistic aspects of language acquisition in young children, from…

  13. Spoken Language Derived Measures for Detecting Mild Cognitive Impairment

    PubMed Central

    Roark, Brian; Mitchell, Margaret; Hosom, John-Paul; Hollingshead, Kristy; Kaye, Jeffrey

    2011-01-01

    Spoken responses produced by subjects during neuropsychological exams can provide diagnostic markers beyond exam performance. In particular, characteristics of the spoken language itself can discriminate between subject groups. We present results on the utility of such markers in discriminating between healthy elderly subjects and subjects with mild cognitive impairment (MCI). Given the audio and transcript of a spoken narrative recall task, a range of markers are automatically derived. These markers include speech features such as pause frequency and duration, and many linguistic complexity measures. We examine measures calculated from manually annotated time alignments (of the transcript with the audio) and syntactic parse trees, as well as the same measures calculated from automatic (forced) time alignments and automatic parses. We show statistically significant differences between clinical subject groups for a number of measures. These differences are largely preserved with automation. We then present classification results, and demonstrate a statistically significant improvement in the area under the ROC curve (AUC) when using automatic spoken language derived features in addition to the neuropsychological test scores. Our results indicate that using multiple, complementary measures can aid in automatic detection of MCI. PMID:22199464
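
    Two of the speech markers named above, pause frequency and pause duration, fall straight out of a word-level time alignment, and the reported area under the ROC curve can be computed by pairwise comparison of classifier scores. The sketch below is illustrative only; the alignment tuples, the pause threshold, and the scores are invented, not the study's data.

```python
def pause_features(alignment, min_pause=0.15):
    """alignment: list of (word, start_sec, end_sec), in temporal order."""
    pauses = [b_start - a_end
              for (_, _, a_end), (_, b_start, _) in zip(alignment, alignment[1:])
              if b_start - a_end >= min_pause]
    total_time = alignment[-1][2] - alignment[0][1]
    freq = len(pauses) / total_time               # pauses per second
    mean_dur = sum(pauses) / len(pauses) if pauses else 0.0
    return freq, mean_dur

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via pairwise wins (Mann-Whitney statistic)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

words = [("the", 0.0, 0.2), ("dog", 0.45, 0.8), ("ran", 1.3, 1.6)]
print(pause_features(words))                 # (1.25 pauses/s, mean 0.375 s)
print(auc([0.9, 0.8, 0.7], [0.6, 0.75, 0.2]))  # 8/9 ≈ 0.89
```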

  14. On-Line Orthographic Influences on Spoken Language in a Semantic Task

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Perre, Laetitia; Dufau, Stephane; Ziegler, Johannes C.

    2009-01-01

    Literacy changes the way the brain processes spoken language. Most psycholinguists believe that orthographic effects on spoken language are either strategic or restricted to meta-phonological tasks. We used event-related brain potentials (ERPs) to investigate the locus and the time course of orthographic effects on spoken word recognition in a…

  15. Using Language Sample Analysis to Assess Spoken Language Production in Adolescents

    ERIC Educational Resources Information Center

    Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann

    2016-01-01

    Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…

  16. How long-term memory and accentuation interact during spoken language comprehension.

    PubMed

    Li, Xiaoqing; Yang, Yufang

    2013-04-01

    Spoken language comprehension requires immediate integration of different information types, such as semantics, syntax, and prosody. Meanwhile, both the information derived from speech signals and the information retrieved from long-term memory exert their influence on language comprehension immediately. Using EEG (electroencephalogram), the present study investigated how the information retrieved from long-term memory interacts with accentuation during spoken language comprehension. Mini Chinese discourses were used as stimuli, with an interrogative or assertive context sentence preceding the target sentence. The target sentence included one critical word conveying new information. The critical word was either highly expected or lowly expected given the information retrieved from long-term memory. Moreover, the critical word was either consistently accented or inconsistently de-accented. The results revealed that for lowly expected new information, inconsistently de-accented words elicited a larger N400 and larger theta power increases (4-6 Hz) than consistently accented words. In contrast, for the highly expected new information, consistently accented words elicited a larger N400 and larger alpha power decreases (8-14 Hz) than inconsistently de-accented words. The results suggest that, during spoken language comprehension, the effect of accentuation interacted with the information retrieved from long-term memory immediately. Moreover, our results also have important consequences for our understanding of the processing nature of the N400. The N400 amplitude is not only enhanced for incorrect information (new and de-accented word) but also enhanced for correct information (new and accented words). PMID:23376769

  17. Accessing characters in spoken Chinese disyllables: An ERP study on the resolution of auditory ambiguity.

    PubMed

    Chen, Xuqian; Huang, Guoliang; Huang, Jian

    2016-01-01

    Chinese differs from most Indo-European languages in its phonological, lexical, and syntactic structures. One of its unique properties is the abundance of homophones at the monosyllabic/morphemic level, with the consequence that monosyllabic homophones are all ambiguous in speech perception. Two-morpheme Chinese words can be composed of two high homophone-density morphemes (HH words), two low homophone-density morphemes (LL words), or one high and one low homophone-density morpheme (LH or HL words). The assumption of a simple inhibitory homophone effect is called into question in the case of disyllabic spoken word recognition, in which the recognition of one morpheme is affected by semantic information given by the other. Event-related brain potentials (ERPs) were used to trace on-line competitions among morphemic homophones in accessing Chinese disyllables. Results showing significant differences in ERP amplitude when comparing LL and LH words, but not when comparing LL and HL words, suggested that the first morpheme cannot be accessed without feedback from the second morpheme. Most importantly, analyses of N400 amplitude among different densities showed a converse homophone effect in which LL words, rather than LH or HL words, triggered a larger N400. These findings provide strong evidence of a dynamic integration system at work during spoken Chinese disyllable recognition. PMID:26589544

  18. Le Francais parle. Etudes sociolinguistiques (Spoken French. Sociolinguistic Studies). Current Inquiry into Languages and Linguistics 30.

    ERIC Educational Resources Information Center

    Thibault, Pierrette

    This volume contains twelve articles dealing with the French language as spoken in Quebec. The following topics are addressed: (1) language change and variation; (2) coordinating expressions in the French spoken in Montreal; (3) expressive language as source of language change; (4) the role of correction in conversation; (5) social change and…

  19. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  20. "Authenticity" in Language Testing: Evaluating Spoken Language Tests for International Teaching Assistants.

    ERIC Educational Resources Information Center

    Hoekje, Barbara; Linnell, Kimberly

    1994-01-01

    Bachman's framework of language testing and standard of authenticity for language testing instruments were used to evaluate three instruments--the SPEAK (Spoken Proficiency English Assessment Kit) test, OPI (Oral Proficiency Interview), and a performance test--as language tests for nonnative-English-speaking teaching assistants. (Contains 53…

  1. Enduring Advantages of Early Cochlear Implantation for Spoken Language Development

    PubMed Central

    Geers, Ann E.; Nicholas, Johanna G.

    2013-01-01

    Purpose To determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12 and 38 months of age. Relative advantages of receiving a bilateral CI after age 4.5, better pre-CI aided hearing, and longer CI experience were also examined. Method Sixty children participated in a prospective longitudinal study of outcomes at 4.5 and 10.5 years of age. Twenty-nine children received a sequential second CI. Test scores were compared to normative samples of hearing age-mates, and predictors of outcomes were identified. Results Standard scores on language tests at 10.5 years of age remained significantly correlated with age of first cochlear implantation. Scores were not associated with receipt of a second, sequentially-acquired CI. Significantly higher scores were achieved for vocabulary as compared with overall language, a finding not evident when the children were tested at younger ages. Conclusion Age-appropriate spoken language skills continued to be more likely with younger AOI, even after an average of 8.6 years of additional CI use. Receipt of a second implant between ages 4–10 years and longer duration of device use did not provide significant added benefit. PMID:23275406

  2. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    ERIC Educational Resources Information Center

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  3. The Cortical Organization of Lexical Knowledge: A Dual Lexicon Model of Spoken Language Processing

    ERIC Educational Resources Information Center

    Gow, David W., Jr.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood.…

  4. Bimodal Bilinguals Co-activate Both Languages during Spoken Comprehension

    PubMed Central

    Shook, Anthony; Marian, Viorica

    2012-01-01

    Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are also activated in parallel. Hearing ASL-English bimodal bilinguals’ and English monolinguals’ eye-movements were recorded during a visual world paradigm, in which participants were instructed, in English, to select objects from a display. In critical trials, the target item appeared with a competing item that overlapped with the target in ASL phonology. Bimodal bilinguals looked more at competing items than at phonologically unrelated items, and looked more at competing items relative to monolinguals, indicating activation of the sign-language during spoken English comprehension. The findings suggest that language co-activation is not modality specific, and provide insight into the mechanisms that may underlie cross-modal language co-activation in bimodal bilinguals, including the role that top-down and lateral connections between levels of processing may play in language comprehension. PMID:22770677

  5. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    ERIC Educational Resources Information Center

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  6. Windows into Sensory Integration and Rates in Language Processing: Insights from Signed and Spoken Languages

    ERIC Educational Resources Information Center

    Hwang, So-One K.

    2011-01-01

    This dissertation explores the hypothesis that language processing proceeds in "windows" that correspond to representational units, where sensory signals are integrated according to time-scales that correspond to the rate of the input. To investigate universal mechanisms, a comparison of signed and spoken languages is necessary. Underlying the…

  7. Optimally efficient neural systems for processing spoken language.

    PubMed

    Zhuang, Jie; Tyler, Lorraine K; Randall, Billi; Stamatakis, Emmanuel A; Marslen-Wilson, William D

    2014-04-01

    Cognitive models claim that spoken words are recognized by an optimally efficient sequential analysis process. Evidence for this is the finding that nonwords are recognized as soon as they deviate from all real words (Marslen-Wilson 1984), reflecting continuous evaluation of speech inputs against lexical representations. Here, we investigate the brain mechanisms supporting this core aspect of word recognition and examine the processes of competition and selection among multiple word candidates. Based on new behavioral support for optimal efficiency in lexical access from speech, a functional magnetic resonance imaging study showed that words with later nonword points generated increased activation in the left superior and middle temporal gyrus (Brodmann area [BA] 21/22), implicating these regions in dynamic sound-meaning mapping. We investigated competition and selection by manipulating the number of initially activated word candidates (competition) and their later drop-out rate (selection). Increased lexical competition enhanced activity in bilateral ventral inferior frontal gyrus (BA 47/45), while increased lexical selection demands activated bilateral dorsal inferior frontal gyrus (BA 44/45). These findings indicate functional differentiation of the fronto-temporal systems for processing spoken language, with left middle temporal gyrus (MTG) and superior temporal gyrus (STG) involved in mapping sounds to meaning, bilateral ventral inferior frontal gyrus (IFG) engaged in less constrained early competition processing, and bilateral dorsal IFG engaged in later, more fine-grained selection processes. PMID:23250955

  8. Scaling laws and model of words organization in spoken and written language

    NASA Astrophysics Data System (ADS)

    Bian, Chunhua; Lin, Ruokuang; Zhang, Xiaoyu; Ma, Qianli D. Y.; Ivanov, Plamen Ch.

    2016-01-01

    A broad range of complex physical and biological systems exhibits scaling laws. The human language is a complex system of words organization. Studies of written texts have revealed intriguing scaling laws that characterize the frequency of words occurrence, rank of words, and growth in the number of distinct words with text length. While studies have predominantly focused on the language system in its written form, such as books, little attention is given to the structure of spoken language. Here we investigate a database of spoken language transcripts and written texts, and we uncover that words organization in both spoken language and written texts exhibits scaling laws, although with different crossover regimes and scaling exponents. We propose a model that provides insight into words organization in spoken language and written texts, and successfully accounts for all scaling laws empirically observed in both language forms.
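
    The scaling laws referred to here are, concretely, Zipf's law (word frequency falls off as a power of rank) and Heaps' law (vocabulary grows sublinearly with text length). A minimal sketch of measuring both from any transcript or book, with the file name as a placeholder:

```python
from collections import Counter

# Placeholder file name: any plain-text transcript or book will do.
text = open("transcript.txt", encoding="utf-8").read().lower().split()

# Zipf's law: frequency of the r-th most common word ~ r**(-alpha)
counts = sorted(Counter(text).values(), reverse=True)
for rank in (1, 10, 100):
    if rank <= len(counts):
        print(f"rank {rank}: frequency {counts[rank - 1]}")

# Heaps' law: number of distinct words V(n) ~ n**beta, with beta < 1
seen, growth = set(), []
for n, word in enumerate(text, start=1):
    seen.add(word)
    if n % 1000 == 0:
        growth.append((n, len(seen)))
print(growth[-5:])   # vocabulary size as a function of text length
```

    Fitting straight lines to these curves on log-log axes yields the exponents and the crossover regimes that the paper contrasts between spoken and written material.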

  9. Effects of orthographic consistency and homophone density on Chinese spoken word recognition.

    PubMed

    Chen, Wei-Fan; Chao, Pei-Chun; Chang, Ya-Ning; Hsu, Chun-Hsien; Lee, Chia-Ying

    2016-01-01

    Studies of alphabetic languages have shown that orthographic knowledge influences phonological processing during spoken word recognition. This study utilized event-related potentials (ERPs) to differentiate two types of phonology-to-orthography (P-to-O) mapping consistencies in Chinese, namely homophone density and orthographic consistency. The ERP data revealed an orthographic consistency effect in the frontal-centrally distributed N400, and a homophone density effect in the central-posteriorly distributed late positive component (LPC). Further source analyses using standardized low-resolution electromagnetic tomography (sLORETA) demonstrated that the orthographic effect was localized not only in the frontal and temporal-parietal regions for phonological processing, but also in the posterior visual cortex for orthographic processing, while the homophone density effect was found in the middle temporal gyrus for lexical-semantic selection, and in the temporal-occipital junction for orthographic processing. These results suggest that orthographic information not only shapes the nature of phonological representations, but may also be activated during on-line spoken word recognition. PMID:27174851

  10. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    ERIC Educational Resources Information Center

    Yip, Michael C.

    2016-01-01

    Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactic information in the spoken word recognition of Chinese words in speech. Because no legal consonant clusters occur within an individual Chinese word, this kind of categorical phonotactic information about Chinese…

  11. Expressive Spoken Language Development in Deaf Children with Cochlear Implants Who Are Beginning Formal Education

    ERIC Educational Resources Information Center

    Inscoe, Jayne Ramirez; Odell, Amanda; Archbold, Susan; Nikolopoulos, Thomas

    2009-01-01

    This paper assesses the expressive spoken grammar skills of young deaf children using cochlear implants who are beginning formal education, compares them with those achieved by normally hearing children, and considers possible implications for educational management. Spoken language grammar was assessed, three years after implantation, in 45 children…

  12. Top Languages Spoken by English Language Learners Nationally and by State. ELL Information Center Fact Sheet Series. No. 3

    ERIC Educational Resources Information Center

    Batalova, Jeanne; McHugh, Margie

    2010-01-01

    While English Language Learner (ELL) students in the United States speak more than 150 languages, Spanish is by far the most common home or first language, but is not the top language spoken by ELLs in every state. This fact sheet, based on analysis of the U.S. Census Bureau's 2009 American Community Survey, documents the top languages spoken…

  13. Foreign Language Tutoring in Oral Conversations Using Spoken Dialog Systems

    NASA Astrophysics Data System (ADS)

    Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae

    Although there have been enormous investments into English education all around the world, little has changed in the style of English instruction. Considering the shortcomings of the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches including theories, technologies, systems, and field studies, and providing relevant pointers. On top of the state-of-the-art technologies of spoken dialog systems, a variety of adaptations have been applied to overcome some problems caused by numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that helps learners become proficient. Integrating these efforts resulted in intelligent educational robots — Mero and Engkey — and virtual 3D language learning games, Pomy. To verify the effects of our approaches on students' communicative abilities, we have conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.

  14. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    PubMed

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest, ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success. PMID
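
    The ALFF measure used here is, in essence, the mean amplitude of a voxel's resting-state time series within a low-frequency band (conventionally 0.01–0.08 Hz). A minimal sketch under that assumption, with a synthetic time series and an illustrative repetition time rather than the study's acquisition parameters:

```python
import numpy as np

def alff(timeseries, tr, band=(0.01, 0.08)):
    """Mean FFT amplitude of a (demeaned) time series within the band."""
    ts = timeseries - timeseries.mean()
    freqs = np.fft.rfftfreq(len(ts), d=tr)       # tr = sampling interval (s)
    amp = np.abs(np.fft.rfft(ts)) / len(ts)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return amp[mask].mean()

rng = np.random.default_rng(1)
tr = 2.0                                   # illustrative TR in seconds
t = np.arange(240) * tr                    # an 8-minute scan
voxel = np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 0.5, t.size)
print(f"ALFF = {alff(voxel, tr):.4f}")     # dominated by the 0.05 Hz component
```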

  15. The employment of a spoken language computer applied to an air traffic control task.

    NASA Technical Reports Server (NTRS)

    Laveson, J. I.; Silver, C. A.

    1972-01-01

    Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve controller performance.

  16. Attentional Capture of Objects Referred to by Spoken Language

    ERIC Educational Resources Information Center

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  17. Profound deafness and the acquisition of spoken language in children

    PubMed Central

    Vlastarakos, Petros V

    2012-01-01

    Profound congenital sensorineural hearing loss (SNHL) is not so infrequent, affecting 1 to 2 of every 1000 newborns in western countries. Nevertheless, universal hearing screening programs have not been widely applied, although such programs are already established for metabolic diseases. The acquisition of spoken language is a time-dependent process, and some form of linguistic input should be present before the first 6 mo of life for a child to become linguistically competent. Therefore, profoundly deaf children should be detected early and referred promptly so that the process of auditory rehabilitation can be initiated. Hearing assessment methods should reflect the behavioural audiogram in an accurate manner. Additional disabilities also need to be taken into account. Profound congenital SNHL is managed by a multidisciplinary team. Affected infants should be bilaterally fitted with hearing aids, no later than 3 mo after birth. They should be monitored until the first year of age. If they are not progressing linguistically, cochlear implantation can be considered after thorough preoperative assessment. Prelingually deaf children develop significant speech perception and production abilities, and speech intelligibility over time, following cochlear implantation. Age at intervention and oral communication are the most important determinants of outcomes. Realistic parental expectations are also essential. Cochlear implant programs deserve the strong support of community members, professional bodies, and political authorities in order to be successful, and to maximize the future benefits of pediatric cochlear implantation for human societies. PMID:25254164

  18. Iconicity as a General Property of Language: Evidence from Spoken and Signed Languages

    PubMed Central

    Perniss, Pamela; Thompson, Robin L.; Vigliocco, Gabriella

    2010-01-01

    Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings found in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity need also be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to “hook up” to motor, perceptual, and affective experience. PMID:21833282

  19. Chinese Language Guide. Level I.

    ERIC Educational Resources Information Center

    Bay Area Bilingual Education League, Berkeley, CA.

    This comprehensive Chinese language development guide for bilingual Chinese-English educators contains fifteen objectives along with related learning activities to be taught in the Chinese bilingual program. The guide emphasizes audio-lingual skill development and involves Chinese games, songs, foods, and holidays. (Author/AM)

  20. Recognition of Signed and Spoken Language: Different Sensory Inputs, the Same Segmentation Procedure

    ERIC Educational Resources Information Center

    Orfanidou, Eleni; Adam, Robert; Morgan, Gary; McQueen, James M.

    2010-01-01

    Signed languages are articulated through simultaneous upper-body movements and are seen; spoken languages are articulated through sequential vocal-tract movements and are heard. But word recognition in both language modalities entails segmentation of a continuous input into discrete lexical units. According to the Possible Word Constraint (PWC),…

  1. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    ERIC Educational Resources Information Center

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied…

  2. Spoken Word Recognition of Chinese Words in Continuous Speech

    ERIC Educational Resources Information Center

    Yip, Michael C. W.

    2015-01-01

    The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending position of Cantonese syllables than others, this kind of probabilistic information may cue the locations…
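
    One plausible construal of the positional statistic described above is the probability that a syllable occupies the initial or final position of the words it occurs in. The sketch below shows how such probabilities could be estimated from a corpus of syllabified words; the corpus is a hypothetical stand-in for illustration, not the study's Cantonese materials.

      # Sketch: estimating positional probabilities of syllables from a toy
      # corpus of syllabified words. A listener could, in principle, use a
      # high word-initial probability as a cue to a word boundary.
      from collections import Counter

      corpus = [            # hypothetical "words" as syllable lists
          ["hua", "yuan"],
          ["hua", "duo"],
          ["gong", "yuan"],
          ["yuan", "lin"],
      ]

      initial = Counter(word[0] for word in corpus)
      final = Counter(word[-1] for word in corpus)
      total = Counter(syl for word in corpus for syl in word)

      def positional_prob(syl: str, position: Counter) -> float:
          # P(syllable appears in this position | syllable appears at all)
          return position[syl] / total[syl]

      for syl in sorted(total):
          print(f"{syl}: P(initial)={positional_prob(syl, initial):.2f}, "
                f"P(final)={positional_prob(syl, final):.2f}")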

  3. Horizontal Flow of Semantic and Phonological Information in Chinese Spoken Sentence Production

    ERIC Educational Resources Information Center

    Yang, Jin-Chen; Yang, Yu-Fang

    2008-01-01

    A variant of the picture--word interference paradigm was used in three experiments to investigate the horizontal information flow of semantic and phonological information between nouns in spoken Mandarin Chinese sentences. Experiment 1 demonstrated that there is a semantic interference effect when the word in the second phrase (N3) and the first…

  4. A Critique of Mark D. Allen's "The Preservation of Verb Subcategory Knowledge in a Spoken Language Comprehension Deficit"

    ERIC Educational Resources Information Center

    Kemmerer, David

    2008-01-01

    Allen [Allen, M. (2005). "The preservation of verb subcategory knowledge in a spoken language comprehension deficit." "Brain and Language, 95", 255-264.] reports a single patient, WBN, who, during spoken language comprehension, is still able to access some of the syntactic properties of verbs despite being unable to access some of their semantic…

  5. Expository Writing in Chinese. International Studies, East Asian Language Texts, No. 5.

    ERIC Educational Resources Information Center

    McMahon, Keith; And Others

    This text is intended for use by advanced students of the Chinese language to learn to write at the college level in modern Chinese. The first ten lessons teach how to progress from the spoken structures to their contemporary written forms. Each lesson contains a text with a familiar form, notes on grammatical structures, and exercises. The text…

  6. Tonal Language Background and Detecting Pitch Contour in Spoken and Musical Items

    ERIC Educational Resources Information Center

    Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.

    2013-01-01

    An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…

  7. Gesture in Multiparty Interaction: A Study of Embodied Discourse in Spoken English and American Sign Language

    ERIC Educational Resources Information Center

    Shaw, Emily P.

    2013-01-01

    This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…

  8. Literacy in the Mainstream Inner-City School: Its Relationship to Spoken Language

    ERIC Educational Resources Information Center

    Myers, Lucy; Botting, Nicola

    2008-01-01

    This study describes the language and literacy skills of 11-year-olds attending a mainstream school in an area of social and economic disadvantage. The proportion of these young people experiencing difficulties in decoding and reading comprehension was identified and the relationship between spoken language skills and reading comprehension…

  9. Spoken Language Benefits of Extending Cochlear Implant Candidacy Below 12 Months of Age

    PubMed Central

    Nicholas, Johanna G.; Geers, Ann E.

    2013-01-01

    Objective To test the hypothesis that cochlear implantation surgery before 12 months of age yields better spoken language results than surgery between 12 and 18 months of age. Study Design Language testing administered to children at 4.5 years of age (± 2 months). Setting Schools, speech-language therapy offices, and cochlear implant (CI) centers in the US and Canada. Participants 69 children who received a cochlear implant between 6 and 18 months of age. All children were learning to communicate via listening and spoken language in English-speaking families. Main Outcome Measure Standard scores on receptive vocabulary and expressive and receptive language (includes grammar). Results Children with CI surgery at 6–11 months (N=27) achieved higher scores on all measures as compared to those with surgery at 12–18 months (N=42). Regression analysis revealed a linear relationship between age of implantation and language outcomes throughout the 6–18 month surgery-age range. Conclusion For children in intervention programs emphasizing listening and spoken language, cochlear implantation before 12 months of age appears to provide a significant advantage for spoken language achievement observed at 4.5 years of age. PMID:23478647

  10. Consequences of the Now-or-Never bottleneck for signed versus spoken languages.

    PubMed

    Emmorey, Karen

    2016-01-01

    Signed and spoken languages emerge, change, are acquired, and are processed under distinct perceptual, motor, and memory constraints. Therefore, the Now-or-Never bottleneck has different ramifications for these languages, which are highlighted in this commentary. The extent to which typological differences in linguistic structure can be traced to processing differences provides unique evidence for the claim that structure is processing. PMID:27562833

  11. A real-time spoken-language system for interactive problem-solving, combining linguistic and statistical technology for improved spoken language understanding

    NASA Astrophysics Data System (ADS)

    Moore, Robert C.; Cohen, Michael H.

    1993-09-01

    Under this effort, SRI has developed spoken-language technology for interactive problem solving, featuring real-time performance for up to several thousand word vocabularies, high semantic accuracy, habitability within the domain, and robustness to many sources of variability. Although the technology is suitable for many applications, efforts to date have focused on developing an Air Travel Information System (ATIS) prototype application. SRI's ATIS system has been evaluated in four ARPA benchmark evaluations, and has consistently been at or near the top in performance. These achievements are the result of SRI's technical progress in speech recognition, natural-language processing, and speech and natural-language integration.

  12. Diversity and Difference: Identity Issues of Chinese Heritage Language Learners from Dialect Backgrounds

    ERIC Educational Resources Information Center

    Wong, Ka F.; Xiao, Yang

    2010-01-01

    The goal of this study is to explore the identity constructions of Chinese heritage language students from dialect backgrounds. Their experiences in learning Mandarin as a "heritage" language--even though it is spoken neither at home nor in their immediate communities--highlight how identities are produced, processed, and practiced in our…

  13. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    PubMed

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. PMID:25820191

  14. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    ERIC Educational Resources Information Center

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  15. Spoken Language Scores of Children Using Cochlear Implants Compared to Hearing Age-Mates at School Entry

    ERIC Educational Resources Information Center

    Geers, Ann E.; Moog, Jean S.; Biedenstein, Julia; Brenner, Christine; Hayes, Heather

    2009-01-01

    This study investigated three questions: Is it realistic to expect age-appropriate spoken language skills in children with cochlear implants (CIs) who received auditory-oral intervention during the preschool years? What characteristics predict successful spoken language development in this population? Are children with CIs more proficient in some…

  16. Word reading skill predicts anticipation of upcoming spoken language input: a study of children developing proficiency in reading.

    PubMed

    Mani, Nivedita; Huettig, Falk

    2014-10-01

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing, potentially attributable to participants' literacy skills. Against this background, the current study examined the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, as in previous studies to date, successfully anticipated upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading, meta-phonological awareness, or spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. PMID:24955519

  17. AG Bell Academy Certification Program for Listening and Spoken Language Specialists: Meeting a World-Wide Need for Qualified Professionals

    ERIC Educational Resources Information Center

    Goldberg, Donald M.; Dickson, Cheryl L.; Flexer, Carol

    2010-01-01

    This article discusses the AG Bell Academy for Listening and Spoken Language--an organization designed to build capacity of certified Listening and Spoken Language Specialists (LSLS) by defining and maintaining a set of professional standards for LSLS professionals and thereby addressing the global deficit of qualified LSLS. Definitions and…

  18. Preliminary findings of similarities and differences in the signed and spoken language of children with autism.

    PubMed

    Shield, Aaron

    2014-11-01

    Approximately 30% of hearing children with autism spectrum disorder (ASD) do not acquire expressive language, and those who do often show impairments related to their social deficits, using language instrumentally rather than socially, with a poor understanding of pragmatics and a tendency toward repetitive content. Linguistic abnormalities can be clinically useful as diagnostic markers of ASD and as targets for intervention. Studies have begun to document how ASD manifests in children who are deaf for whom signed languages are the primary means of communication. Though the underlying disorder is presumed to be the same in children who are deaf and children who hear, the structures of signed and spoken languages differ in key ways. This article describes similarities and differences between the signed and spoken language acquisition of children on the spectrum. Similarities include echolalia, pronoun avoidance, neologisms, and the existence of minimally verbal children. Possible areas of divergence include pronoun reversal, palm reversal, and facial grammar. PMID:25321855

  19. Setting the tone: an ERP investigation of the influences of phonological similarity on spoken word recognition in Mandarin Chinese.

    PubMed

    Malins, Jeffrey G; Joanisse, Marc F

    2012-07-01

    We investigated the influences of phonological similarity on the time course of spoken word processing in Mandarin Chinese. Event related potentials were recorded while adult native speakers of Mandarin (N=19) judged whether auditory words matched or mismatched visually presented pictures. Mismatching words were of the following nature: segmental (e.g., picture: hua1 'flower'; sound: hua4 'painting'); cohort (e.g., picture: hua1 'flower'; sound: hui1 'gray'); rhyme (e.g., picture: hua1 'flower'; sound: gua1 'melon'); tonal (e.g., picture: hua1 'flower'; sound: jing1 'whale'); unrelated (e.g., picture: hua1 'flower'; sound: lang2 'wolf'). Expectancy violations in the segmental condition showed an early-going modulation of components (starting at 250 ms post-stimulus onset), suggesting that listeners used tonal information to constrain word recognition as soon as it became available, just like they did with phonemic information in the cohort condition. However, effects were less persistent and more left-lateralized in the segmental than cohort condition, suggesting dissociable cognitive processes underlie access to tonal versus phonemic information. Cohort versus rhyme mismatches showed distinct patterns of modulation which were very similar to what has been observed in English, suggesting onsets and rimes are weighted similarly across the two languages. Last, we did not observe effects for whole-syllable mismatches above and beyond those for mismatches in individual components, suggesting the syllable does not merit a special status in Mandarin spoken word recognition. These results are discussed with respect to modifications needed for existing models to accommodate the tonal languages spoken by a large proportion of the world's speakers. PMID:22595659

  20. Speak Mandarin, a Beginning Text in Spoken Chinese.

    ERIC Educational Resources Information Center

    Fenn, Henry C.; Tewksbury, M. Gardner

    This text is a thorough revision and expansion of M. Gardner Tewksbury's "Speak Chinese" of 1948. The 24 lessons of the earlier text have been regrouped into 20, and the vocabulary now includes 850 items. When used with the exercises in the accompanying "Student's Workbook," the text is suitable for intensive courses as well as more conventional…

  1. Bimodal Bilinguals Co-Activate Both Languages during Spoken Comprehension

    ERIC Educational Resources Information Center

    Shook, Anthony; Marian, Viorica

    2012-01-01

    Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are…

  2. Contradictions in Chinese Language Reform.

    ERIC Educational Resources Information Center

    Cheng, Chin-Chuan

    The Draft of the Second Chinese Character Simplification Scheme proposed by the Chinese Committee on Language Reform, published in 1977, is discussed. The political history of the draft and current uncertainty about character simplification are examined, and a rigorous methodology for determining the success rate of a script reform is proposed.…

  3. Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension

    PubMed Central

    Sekerina, Irina A.; Campanelli, Luca; Van Dyke, Julie A.

    2016-01-01

    The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility for studying retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors, which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects, when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals. PMID:27378974

  5. Factors influencing spoken language outcomes in children following early cochlear implantation.

    PubMed

    Geers, Ann E

    2006-01-01

    Development of spoken language is an objective of virtually all English-based educational programs for children who are deaf or hard of hearing. The primary goal of pediatric cochlear implantation is to provide critical speech information to the child's auditory system and brain to maximize the chances of developing spoken language. Cochlear implants have the potential to accomplish for profoundly deaf children what the electronic hearing aid made possible for hard of hearing children more than 50 years ago. Though the cochlear implant does not allow for hearing of the same quality as that experienced by persons without a hearing loss, it nonetheless has revolutionized the experience of spoken language acquisition for deaf children. However, the variability in performance remains quite high, with limited explanation as to the reasons for good and poor outcomes. Evaluating the success of cochlear implantation requires careful consideration of intervening variables, the characteristics of which are changing with advances in technology and clinical practice. Improvement in speech coding strategies, implantation at younger ages and in children with greater preimplant residual hearing, and rehabilitation focused on speech and auditory skill development are leading to a larger proportion of children approaching spoken language levels of hearing age-mates. PMID:16891836

  6. Research on Spoken Language Processing. Progress Report No. 21 (1996-1997).

    ERIC Educational Resources Information Center

    Pisoni, David B.

    This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…

  7. A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome

    ERIC Educational Resources Information Center

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-01-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…

  8. Bidialectal African American Adolescents' Beliefs about Spoken Language Expectations in English Classrooms

    ERIC Educational Resources Information Center

    Godley, Amanda; Escher, Allison

    2012-01-01

    This article describes the perspectives of bidialectal African American adolescents--adolescents who speak both African American Vernacular English (AAVE) and Standard English--on spoken language expectations in their English classes. Previous research has demonstrated that many teachers hold negative views of AAVE, but existing scholarship has…

  9. Using Unscripted Spoken Texts in the Teaching of Second Language Listening

    ERIC Educational Resources Information Center

    Wagner, Elvis

    2014-01-01

    Most spoken texts that are used in second language (L2) listening classroom activities are scripted texts, where the text is written, revised, polished, and then read aloud with artificially clear enunciation and slow rate of speech. This article explores the field's overreliance on these scripted texts, at the expense of including unscripted…

  10. On-Line Syntax: Thoughts on the Temporality of Spoken Language

    ERIC Educational Resources Information Center

    Auer, Peter

    2009-01-01

    One fundamental difference between spoken and written language has to do with the "linearity" of speaking in time, in that the temporal structure of speaking is inherently the outcome of an interactive process between speaker and listener. But despite the status of "linearity" as one of Saussure's fundamental principles, in practice little more…

  11. Comparing Spoken Language Treatments for Minimally Verbal Preschoolers with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna

    2013-01-01

    Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in…

  12. Cross-Language Perception of Cantonese Vowels Spoken by Native and Non-Native Speakers

    ERIC Educational Resources Information Center

    So, Connie K.; Attina, Virginie

    2014-01-01

    This study examined the effect of native language background on listeners' perception of native and non-native vowels spoken by native (Hong Kong Cantonese) and non-native (Mandarin and Australian English) speakers. Listeners completed discrimination and identification tasks, with and without visual cues, in clear and noisy conditions. Results…

  13. Professional Training in Listening and Spoken Language--A Canadian Perspective

    ERIC Educational Resources Information Center

    Fitzpatrick, Elizabeth

    2010-01-01

    Several factors undoubtedly influenced the development of listening and spoken language options for children with hearing loss in Canada. The concept of providing auditory-based rehabilitation was popularized in Canada in the 1960s through the work of Drs. Daniel Ling and Agnes Ling in Montreal. The Lings founded the McGill University Project for…

  14. Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language

    ERIC Educational Resources Information Center

    Nicholas, Johanna G.; Geers, Ann E.

    2008-01-01

    Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed checklists of…

  15. Beyond Rhyme or Reason: ERPs Reveal Task-Specific Activation of Orthography on Spoken Language

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Perre, Laetitia; Ziegler, Johannes C.

    2011-01-01

    Metaphonological tasks, such as rhyme judgment, have been the primary tool for the investigation of the effects of orthographic knowledge on spoken language. However, it has been recently argued that the orthography effect in rhyme judgment does not reflect the automatic activation of orthographic codes but rather stems from sophisticated response…

  16. Parental Reports of Spoken Language Skills in Children with Down Syndrome.

    ERIC Educational Resources Information Center

    Berglund, Eva; Eriksson, Marten; Johansson, Irene

    2001-01-01

    Spoken language in 330 children with Down syndrome (ages 1-5) and 336 normally developing children (ages 1-2) was compared. Growth trends, individual variation, sex differences, and performance on vocabulary, pragmatic, and grammar scales, as well as maximum length of utterance, were explored. Three- and four-year-old Down syndrome children…

  17. Developing and Testing EVALOE: A Tool for Assessing Spoken Language Teaching and Learning in the Classroom

    ERIC Educational Resources Information Center

    Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José

    2015-01-01

    Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…

  18. The Contribution of the Inferior Parietal Cortex to Spoken Language Production

    ERIC Educational Resources Information Center

    Geranmayeh, Fatemeh; Brownsett, Sonia L. E.; Leech, Robert; Beckmann, Christian F.; Woodhead, Zoe; Wise, Richard J. S.

    2012-01-01

    This functional MRI study investigated the involvement of the left inferior parietal cortex (IPC) in spoken language production (Speech). Its role has been apparent in some studies but not others, and is not convincingly supported by clinical studies as they rarely include cases with lesions confined to the parietal lobe. We compared Speech with…

  19. Contribution of Implicit Sequence Learning to Spoken Language Processing: Some Preliminary Findings With Hearing Adults

    PubMed Central

    Conway, Christopher M.; Karpicke, Jennifer; Pisoni, David B.

    2013-01-01

    Spoken language consists of a complex, sequentially arrayed signal that contains patterns that can be described in terms of statistical relations among language units. Previous research has suggested that a domain-general ability to learn structured sequential patterns may underlie language acquisition. To test this prediction, we examined the extent to which implicit sequence learning of probabilistically structured patterns in hearing adults is correlated with a spoken sentence perception task under degraded listening conditions. Performance on the sentence perception task was found to be correlated with implicit sequence learning, but only when the sequences were composed of stimuli that were easy to encode verbally. Implicit learning of phonological sequences thus appears to underlie spoken language processing and may indicate a hitherto unexplored cognitive factor that may account for the enormous variability in language outcomes in deaf children with cochlear implants. The present findings highlight the importance of investigating individual differences in specific cognitive abilities as a way to understand and explain language in deaf learners and, in particular, variability in language outcomes following cochlear implantation. PMID:17548805

  20. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment

    PubMed Central

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

    Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070

  1. Retinoic Acid Signaling: A New Piece in the Spoken Language Puzzle.

    PubMed

    van Rhijn, Jon-Ruben; Vernes, Sonja C

    2015-01-01

    Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry including cortico-striato-thalamic loops that control speech-motor output. Understanding the neuro-genetic mechanisms involved in the correct development and function of these pathways will shed light on how humans can effortlessly and innately use spoken language and help to elucidate what goes wrong in speech-language disorders. FOXP2 was the first single gene identified to cause speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that includes receptive and expressive language impairments. The neuro-molecular mechanisms controlled by FOXP2 will give insight into our capacity for speech-motor control, but are only beginning to be unraveled. Recently FOXP2 was found to regulate genes involved in retinoic acid (RA) signaling and to modify the cellular response to RA, a key regulator of brain development. Here we explore evidence that FOXP2 and RA function in overlapping pathways. We summate evidence at molecular, cellular, and behavioral levels that suggest an interplay between FOXP2 and RA that may be important for fine motor control and speech-motor output. We propose RA signaling is an exciting new angle from which to investigate how neuro-genetic mechanisms can contribute to the (spoken) language ready brain. PMID:26635706

  3. The time course of spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    PubMed

    Huang, Xianjun; Yang, Jin-Chen; Zhang, Qin; Guo, Chunyan

    2014-10-01

    In the present study, two experiments were carried out to investigate the time course of spoken word recognition in Mandarin Chinese using both event-related potentials (ERPs) and behavioral measures. To address the hypothesis that there is an early phonological processing stage independent of semantics during spoken word recognition, a unimodal word-matching paradigm was employed, in which both prime and target words were presented auditorily. Experiment 1 manipulated the phonological relations between disyllabic primes and targets, and found an enhanced P2 (200-270 ms post-target onset) as well as a smaller early N400 to word-initial phonological mismatches over fronto-central scalp sites. Experiment 2 manipulated both phonological and semantic relations between monosyllabic primes and targets, and replicated the phonological mismatch-associated P2, which was not modulated by semantic relations. Overall, these results suggest that P2 is a sensitive electrophysiological index of early phonological processing independent of semantics in Mandarin Chinese spoken word recognition. PMID:25172388

  4. Overlapping Networks Engaged during Spoken Language Production and Its Cognitive Control

    PubMed Central

    Wise, Richard J.S.; Mehta, Amrish; Leech, Robert

    2014-01-01

    Spoken language production is a complex brain function that relies on large-scale networks. These include domain-specific networks that mediate language-specific processes, as well as domain-general networks mediating top-down and bottom-up attentional control. Language control is thought to involve a left-lateralized fronto-temporal-parietal (FTP) system. However, these regions do not always activate for language tasks and similar regions have been implicated in nonlinguistic cognitive processes. These inconsistent findings suggest that either the left FTP is involved in multidomain cognitive control or that there are multiple spatially overlapping FTP systems. We present evidence from an fMRI study using multivariate analysis to identify spatiotemporal networks involved in spoken language production in humans. We compared spoken language production (Speech) with multiple baselines, counting (Count), nonverbal decision (Decision), and “rest,” to pull apart the multiple partially overlapping networks that are involved in speech production. A left-lateralized FTP network was activated during Speech and deactivated during Count and nonverbal Decision trials, implicating it in cognitive control specific to sentential spoken language production. A mirror right-lateralized FTP network was activated in the Count and Decision trials, but not Speech. Importantly, a second overlapping left FTP network showed relative deactivation in Speech. These three networks, with distinct time courses, overlapped in the left parietal lobe. Contrary to the standard model of the left FTP as being dominant for speech, we revealed a more complex pattern within the left FTP, including at least two left FTP networks with competing functional roles, only one of which was activated in speech production. PMID:24966373

  5. Spoken Language Processing in the Clarissa Procedure Browser

    NASA Technical Reports Server (NTRS)

    Rayner, M.; Hockey, B. A.; Renders, J.-M.; Chatzichrisafis, N.; Farrell, K.

    2005-01-01

    Clarissa, an experimental voice-enabled procedure browser that has recently been deployed on the International Space Station, is, as far as we know, the first spoken dialog system in space. We describe the objectives of the Clarissa project and the system's architecture. In particular, we focus on three key problems: grammar-based speech recognition using the Regulus toolkit; methods for open-mic speech recognition; and robust side-effect-free dialogue management for handling undos, corrections and confirmations. We first describe the grammar-based recogniser we have built using Regulus, and report experiments comparing it against a class N-gram recogniser trained on the same 3297-utterance dataset. We obtained a 15% relative improvement in WER and a 37% improvement in semantic error rate. The grammar-based recogniser moreover outperforms the class N-gram version for utterances of all lengths from 1 to 9 words inclusive. The central problem in building an open-mic speech recognition system is being able to distinguish between commands directed at the system and other material (cross-talk), which should be rejected. Most spoken dialogue systems make the accept/reject decision by applying a threshold to the recognition confidence score. We show how a simple and general method, based on standard approaches to document classification using Support Vector Machines, can give substantially better performance, and report experiments showing a relative reduction in the task-level error rate of about 25% compared to the baseline confidence-threshold method. Finally, we describe a general side-effect-free dialogue management architecture that we have implemented in Clarissa, which extends the "update semantics" framework by including task as well as dialogue information in the information state. We show that this enables elegant treatments of several dialogue management problems, including corrections, confirmations, querying of the environment, and regression
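
    The accept/reject step described above treats cross-talk rejection as a document-classification problem over recognized utterances rather than as a confidence-threshold decision. Below is a minimal sketch of that general recipe using a bag-of-words Support Vector Machine; the tiny training set and the scikit-learn pipeline are illustrative assumptions, not the Clarissa implementation.

      # Sketch: classify recognized utterances as system-directed commands
      # (accept) or cross-talk (reject) with a linear SVM over bag-of-words
      # features, instead of thresholding a recognition confidence score.
      # The training utterances below are hypothetical, not Clarissa data.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      commands = [          # utterances directed at the system
          "next step", "previous step", "go to step three",
          "set voice volume to five", "read step two", "undo that",
      ]
      cross_talk = [        # speech the system should ignore
          "could you hand me that wrench", "what time is the briefing",
          "we had a great run this morning", "i think the pump looks fine",
      ]

      texts = commands + cross_talk
      labels = [1] * len(commands) + [0] * len(cross_talk)

      # Unigram + bigram counts feed a linear SVM: the simple, general
      # document-classification recipe the record describes.
      clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
      clf.fit(texts, labels)

      for utt in ["go to step five", "hand me the other wrench"]:
          verdict = "ACCEPT" if clf.predict([utt])[0] == 1 else "REJECT"
          print(f"{utt!r} -> {verdict}")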

  6. Defining Spoken Language Benchmarks and Selecting Measures of Expressive Language Development for Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul

    2009-01-01

    Purpose: The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition as part of treatment research studies or for use in applied settings and (b) to propose and define a common terminology for describing levels of spoken…

  7. Defining Spoken Language Benchmarks and Selecting Measures of Expressive Language Development for Young Children With Autism Spectrum Disorders

    PubMed Central

    Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul

    2010-01-01

    Purpose The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition as part of treatment research studies or for use in applied settings and (b) to propose and define a common terminology for describing levels of spoken language ability in the expressive modality and to set benchmarks for determining a child’s language level in order to establish a framework for comparing outcomes across intervention studies. Method The National Institute on Deafness and Other Communication Disorders assembled a group of researchers with interests and experience in the study of language development and disorders in young children with autism spectrum disorders. The group worked for 18 months through a series of conference calls and correspondence, culminating in a meeting held in December 2007 to achieve consensus on these aims. Results The authors recommend moving away from the term functional speech and replacing it with a developmental framework; they recommend using multiple sources of information to define language phases, including natural language samples, parent report, and standardized measures. They also provide guidelines and objective criteria for defining children’s spoken language expression in three major phases that correspond to developmental levels between 12 and 48 months of age. PMID:19380608

  8. Languages Spoken by English Learners (ELs). Fast Facts

    ERIC Educational Resources Information Center

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on English learners include: (1) Top 20 EL languages, as reported in states' top five lists: SY 2011-12; (2) States, including DC, with 80 percent or…

  9. Are Young Children with Cochlear Implants Sensitive to the Statistics of Words in the Ambient Spoken Language?

    ERIC Educational Resources Information Center

    Guo, Ling-Yu; McGregor, Karla K.; Spencer, Linda J.

    2015-01-01

    Purpose: The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation.…

  10. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language.

    PubMed

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  12. Is There a Correlation between Languages Spoken and Intricate Movements of Tongue? A Comparative Study of Various Movements of Tongue among the Three Ethnic Races of Malaysia

    PubMed Central

    Nayak, Satheesha B; Awal, Mahfuzah Binti; Han, Chang Wei; Sivaram, Ganeshram; Vigneswaran, Thimesha; Choon, Tee Lian

    2016-01-01

    Introduction The tongue is used mainly for taste, chewing, and speech. In the present study, we focused on the secondary function of the tongue, namely how it is used in phonetic pronunciation and linguistics, and how these factors affect tongue movements. Objective To compare all possible movements of the tongue among Malaysians belonging to three ethnic races and to find out whether there is any link between the languages spoken and the ability to perform various tongue movements. Materials and Methods A total of 450 undergraduate medical students participated in the study. The students were chosen from three different races, i.e., Malays, Chinese and Indians (Malaysian Indians). Data were collected from the students through a semi-structured interview, following which each student was asked to demonstrate various tongue movements such as protrusion, retraction, flattening, rolling, twisting, folding, or any other special movements. The data obtained were first segregated and analysed according to gender, race, and the types and dialects of the languages spoken. Results We found that most Malaysians were able to perform the basic movements of the tongue, such as protrusion and flattening, and very few were able to perform twisting and folding of the tongue. The ability to perform normal tongue movements and special movements such as folding, twisting, and rolling was higher among Indians than among Malays and Chinese. Conclusion The languages spoken by Indians involve detailed tongue rolling and folding in the pronunciation of certain words, which may explain why Indians are more versatile with tongue movements than the other two races among Malaysians. It is possible that the languages spoken by a person serve as a variable that increases the ability to perform special tongue movements, besides the influence of the person's genetic makeup. PMID:26894051

  13. Using Spoken Language Benchmarks to Characterize the Expressive Language Skills of Young Children With Autism Spectrum Disorders

    PubMed Central

    Weismer, Susan Ellis

    2015-01-01

    Purpose Spoken language benchmarks proposed by Tager-Flusberg et al. (2009) were used to characterize communication profiles of toddlers with autism spectrum disorders and to investigate if there were differences in variables hypothesized to influence language development at different benchmark levels. Method The communication abilities of a large sample of toddlers with autism spectrum disorders (N = 105) were characterized in terms of spoken language benchmarks. The toddlers were grouped according to these benchmarks to investigate whether there were differences in selected variables across benchmark groups at a mean age of 2.5 years. Results The majority of children in the sample presented with uneven communication profiles with relative strengths in phonology and significant weaknesses in pragmatics. When children were grouped according to one expressive language domain, across-group differences were observed in response to joint attention and gestures but not cognition or restricted and repetitive behaviors. Conclusion The spoken language benchmarks are useful for characterizing early communication profiles and investigating features that influence expressive language growth. PMID:26254475

  14. Spoken word recognition by Latino children learning Spanish as their first language

    PubMed Central

    Hurtado, Nereyda; Marchman, Virginia A.; Fernald, Anne

    2010-01-01

    Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development. PMID:17542157

  15. The interface between spoken and written language: developmental disorders.

    PubMed

    Hulme, Charles; Snowling, Margaret J

    2014-01-01

    We review current knowledge about reading development and the origins of difficulties in learning to read. We distinguish between the processes involved in learning to decode print, and the processes involved in reading for meaning (reading comprehension). At a cognitive level, difficulties in learning to read appear to be predominantly caused by deficits in underlying oral language skills. The development of decoding skills appears to depend critically upon phonological language skills, and variations in phoneme awareness, letter-sound knowledge and rapid automatized naming each appear to be causally related to problems in learning to read. Reading comprehension difficulties in contrast appear to be critically dependent on a range of oral language comprehension skills (including vocabulary knowledge and grammatical, morphological and pragmatic skills). PMID:24324239

  16. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    PubMed

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, with the fMRI method, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs, compared to either SJM or PJM without CCs, recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged the left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. PMID:25858311

  17. International Curriculum for Chinese Language Education

    ERIC Educational Resources Information Center

    Scrimgeour, Andrew; Wilson, Philip

    2009-01-01

    The International Curriculum for Chinese Language Education (ICCLE) represents a significant initiative by the Office of Chinese Language Council International (Hanban) to organise and describe objectives and content for a standardised Chinese language curriculum around the world. It aims to provide a reference curriculum for planning, a framework…

  18. Understanding the Relationship between Latino Students' Preferred Learning Styles and Their Language Spoken at Home

    ERIC Educational Resources Information Center

    Maldonado Torres, Sonia Enid

    2016-01-01

    The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…

  19. Enriching English Language Spoken Outputs of Kindergartners in Thailand

    ERIC Educational Resources Information Center

    Wilang, Jeffrey Dawala; Sinwongsuwat, Kemtong

    2012-01-01

    This year is designated as Thailand's "English Speaking Year" with the aim of improving the communicative competence of Thais for the upcoming integration of the Association of Southeast Asian Nations (ASEAN) in 2015. The consistent low-level proficiency of the Thais in the English language has led to numerous curriculum revisions and…

  20. Reading, Writing, and Spoken Language Assessment Profiles for Students Who Are Deaf and Hard of Hearing Compared with Students with Language Learning Disabilities

    ERIC Educational Resources Information Center

    Nelson, Nickola Wolf; Crumpton, Teresa

    2015-01-01

    Working with students who are deaf or hard of hearing (DHH) can raise questions about whether language and literacy delays and difficulties are related directly to late and limited access to spoken language, to co-occurring language learning disabilities (LLD), or to both. A new Test of Integrated Language and Literacy Skills, which incorporates…

  1. Development of Lexical-Semantic Language System: N400 Priming Effect for Spoken Words in 18- and 24-Month Old Children

    ERIC Educational Resources Information Center

    Rama, Pia; Sirri, Louah; Serres, Josette

    2013-01-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related…

  2. Setting the Tone: An ERP Investigation of the Influences of Phonological Similarity on Spoken Word Recognition in Mandarin Chinese

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2012-01-01

    We investigated the influences of phonological similarity on the time course of spoken word processing in Mandarin Chinese. Event-related potentials were recorded while adult native speakers of Mandarin ("N" = 19) judged whether auditory words matched or mismatched visually presented pictures. Mismatching words were of the following nature:…

  3. Teaching Pragmatic Awareness of Spoken Requests to Chinese EAP Learners in the UK: Is Explicit Instruction Effective?

    ERIC Educational Resources Information Center

    Halenko, Nicola; Jones, Christian

    2011-01-01

    The aim of this study is to evaluate the impact of explicit interventional treatment on developing pragmatic awareness and production of spoken requests in an EAP context (taken here to mean those studying/using English for academic purposes in the UK) with Chinese learners of English at a British higher education institution. The study employed…

  4. Task modulation of disyllabic spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    PubMed

    Huang, Xianjun; Yang, Jin-Chen; Chang, Ruohan; Guo, Chunyan

    2016-01-01

    Using unimodal auditory tasks of word-matching and meaning-matching, this study investigated how the phonological and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechanisms induced by the experimental tasks. Both semantic similarity and word-initial phonological similarity between the primes and targets were manipulated. Results showed that at the early stage of recognition (~150-250 ms), an enhanced P2 was elicited by word-initial phonological mismatch in both tasks. In the ~300-500 ms window, a fronto-central negative component was elicited by word-initial phonological similarity in the word-matching task, while a parietal negativity was elicited by semantically unrelated primes in the meaning-matching task, indicating that both the semantic and phonological processes can be involved in this time window, depending on the task requirements. In the late stage (~500-700 ms), a centro-parietal Late N400 was elicited in both tasks, but with a larger effect in the meaning-matching task than in the word-matching task. This finding suggests that the semantic representation of spoken words can be activated automatically in the late stage of recognition, even when semantic processing is not required. However, the magnitude of the semantic activation is modulated by task requirements. PMID:27180951

  5. Task modulation of disyllabic spoken word recognition in Mandarin Chinese: a unimodal ERP study

    PubMed Central

    Huang, Xianjun; Yang, Jin-Chen; Chang, Ruohan; Guo, Chunyan

    2016-01-01

    Using unimodal auditory tasks of word-matching and meaning-matching, this study investigated how the phonological and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechanisms induced by the experimental tasks. Both semantic similarity and word-initial phonological similarity between the primes and targets were manipulated. Results showed that at the early stage of recognition (~150–250 ms), an enhanced P2 was elicited by word-initial phonological mismatch in both tasks. In the ~300–500 ms window, a fronto-central negative component was elicited by word-initial phonological similarity in the word-matching task, while a parietal negativity was elicited by semantically unrelated primes in the meaning-matching task, indicating that both the semantic and phonological processes can be involved in this time window, depending on the task requirements. In the late stage (~500–700 ms), a centro-parietal Late N400 was elicited in both tasks, but with a larger effect in the meaning-matching task than in the word-matching task. This finding suggests that the semantic representation of spoken words can be activated automatically in the late stage of recognition, even when semantic processing is not required. However, the magnitude of the semantic activation is modulated by task requirements. PMID:27180951

  6. Symbolic gestures and spoken language are processed by a common neural system

    PubMed Central

    Xu, Jiang; Gannon, Patrick J.; Emmorey, Karen; Smith, Jason F.; Braun, Allen R.

    2009-01-01

    Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating “be quiet”), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects. PMID:19923436

  7. Symbolic gestures and spoken language are processed by a common neural system.

    PubMed

    Xu, Jiang; Gannon, Patrick J; Emmorey, Karen; Smith, Jason F; Braun, Allen R

    2009-12-01

    Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects. PMID:19923436

  8. The evolutionary history of genes involved in spoken and written language: beyond FOXP2.

    PubMed

    Mozzi, Alessandra; Forni, Diego; Clerici, Mario; Pozzoli, Uberto; Mascheretti, Sara; Guerini, Franca R; Riva, Stefania; Bresolin, Nereo; Cagliani, Rachele; Sironi, Manuela

    2016-01-01

    Humans possess a communication system based on spoken and written language. Other animals can learn vocalization by imitation, but this is not equivalent to human language. Many genes have been reported to be implicated in language impairment (LI) and developmental dyslexia (DD), but their evolutionary history has not been thoroughly analyzed. Herein we analyzed the evolution of ten genes involved in DD and LI. Results show that the evolutionary history of LI genes was comparable in vocal-learning and non-learning species of mammals and birds. For the human lineage, several sites showing evidence of positive selection were identified in KIAA0319 and were already present in Neanderthals and Denisovans, suggesting that any phenotypic change they entailed was shared with archaic hominins. Conversely, in FOXP2, ROBO1, ROBO2, and CNTNAP2 non-coding changes rose to high frequency after the separation from archaic hominins. These variants are promising candidates for association studies in LI and DD. PMID:26912479

  9. The evolutionary history of genes involved in spoken and written language: beyond FOXP2

    PubMed Central

    Mozzi, Alessandra; Forni, Diego; Clerici, Mario; Pozzoli, Uberto; Mascheretti, Sara; Guerini, Franca R.; Riva, Stefania; Bresolin, Nereo; Cagliani, Rachele; Sironi, Manuela

    2016-01-01

    Humans possess a communication system based on spoken and written language. Other animals can learn vocalization by imitation, but this is not equivalent to human language. Many genes have been reported to be implicated in language impairment (LI) and developmental dyslexia (DD), but their evolutionary history has not been thoroughly analyzed. Herein we analyzed the evolution of ten genes involved in DD and LI. Results show that the evolutionary history of LI genes was comparable in vocal-learning and non-learning species of mammals and birds. For the human lineage, several sites showing evidence of positive selection were identified in KIAA0319 and were already present in Neanderthals and Denisovans, suggesting that any phenotypic change they entailed was shared with archaic hominins. Conversely, in FOXP2, ROBO1, ROBO2, and CNTNAP2 non-coding changes rose to high frequency after the separation from archaic hominins. These variants are promising candidates for association studies in LI and DD. PMID:26912479

  10. Predictors of Early Reading Skill in 5-Year-Old Children with Hearing Loss Who Use Spoken Language

    ERIC Educational Resources Information Center

    Cupples, Linda; Ching, Teresa Y. C.; Crowe, Kathryn; Day, Julia; Seeto, Mark

    2014-01-01

    This research investigated the concurrent association between early reading skills and phonological awareness (PA), print knowledge, language, cognitive, and demographic variables in 101 5-year-old children with prelingual hearing losses ranging from mild to profound who communicated primarily via spoken language. All participants were fitted…

  11. Spoken Language Comprehension of Phrases, Simple and Compound-Active Sentences in Non-Speaking Children with Severe Cerebral Palsy

    ERIC Educational Resources Information Center

    Geytenbeek, Joke J. M.; Heim, Margriet J. M.; Knol, Dirk L.; Vermeulen, R. Jeroen; Oostrom, Kim J.

    2015-01-01

    Background: Children with severe cerebral palsy (CP) (i.e. "non-speaking children with severely limited mobility") are restricted in many domains that are important to the acquisition of language. Aims: To investigate comprehension of spoken language at the sentence-type level in non-speaking children with severe CP. Methods & Procedures:…

  12. Positive Emotional Language in the Final Words Spoken Directly Before Execution

    PubMed Central

    Hirschmüller, Sarah; Egloff, Boris

    2016-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided evidence that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one’s own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality. PMID:26793135
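
    The dictionary-based word counting this abstract describes (LIWC-style) is easy to reproduce in outline. Below is a minimal, hypothetical Python sketch; the tiny word lists are invented stand-ins, not the validated lexicon the authors used.

    ```python
    # Hypothetical sketch of dictionary-based emotion word counting.
    # The word lists are invented stand-ins for a validated lexicon.
    POSITIVE = {"love", "peace", "thank", "happy", "hope", "grateful"}
    NEGATIVE = {"hate", "fear", "pain", "sad", "angry", "guilt"}

    def emotion_proportions(text):
        """Return (positive, negative) emotion-word proportions of a text."""
        words = [w.strip(".,!?;:'\"").lower() for w in text.split()]
        total = len(words) or 1
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        return pos / total, neg / total

    pos_rate, neg_rate = emotion_proportions(
        "I love you all and I hope for peace, thank you."
    )
    print(f"positive: {pos_rate:.2%}, negative: {neg_rate:.2%}")
    ```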

  13. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    PubMed

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided evidence that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality. PMID:26793135

  14. Copula filtration of spoken language signals on the background of acoustic noise

    NASA Astrophysics Data System (ADS)

    Kolchenko, Lilia V.; Sinitsyn, Rustem B.

    2010-09-01

    This paper addresses the filtering of acoustic signals against a background of acoustic noise. Filtering is performed with the help of a nonlinear analogue of the correlation function, the copula, which is estimated using kernel estimates of the cumulative distribution function. In the second stage, we propose a new adaptive filtering procedure in which silence and speech intervals are detected prior to filtering by a nonparametric algorithm. The results are confirmed by experimental processing of spoken-language signals.
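
    As a rough illustration of the two ingredients this abstract names, the sketch below computes a rank/CDF-based dependence measure (a simple stand-in for the paper's kernel-estimated copula) and performs nonparametric speech/silence segmentation with a robust median threshold. This is not the authors' algorithm; parameter values and the test signal are invented.

    ```python
    # Illustrative only: rank/CDF-based ("copula-style") dependence and
    # nonparametric speech/silence detection. Parameter values are invented.
    import numpy as np

    def empirical_cdf_transform(x):
        """Map samples to (0, 1) via their empirical CDF (a rank transform)."""
        ranks = np.argsort(np.argsort(x))
        return (ranks + 1) / (len(x) + 1)

    def copula_dependence(x, y):
        """Correlation of CDF-transformed samples: a nonlinear (rank-based)
        analogue of the ordinary correlation coefficient."""
        return np.corrcoef(empirical_cdf_transform(x),
                           empirical_cdf_transform(y))[0, 1]

    def detect_speech_frames(signal, frame_len=256, k=3.0):
        """Flag frames whose median |amplitude| exceeds k times the global
        median |amplitude| -- a robust, distribution-free threshold."""
        n = len(signal) // frame_len
        frames = signal[:n * frame_len].reshape(n, frame_len)
        return np.median(np.abs(frames), axis=1) > k * np.median(np.abs(signal))

    # Toy usage: a 440 Hz burst between 0.3 s and 0.6 s in low-level noise.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 8000)
    burst = np.where((t > 0.3) & (t < 0.6), np.sin(2 * np.pi * 440 * t), 0.0)
    noisy = burst + 0.05 * rng.standard_normal(t.size)
    print("speech frames:", detect_speech_frames(noisy).sum(), "of", t.size // 256)
    print("lag-1 dependence:", round(copula_dependence(noisy[:-1], noisy[1:]), 2))
    ```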

  15. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    PubMed

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics. PMID:23081791

  16. A Spoken-Language Intervention for School-Aged Boys With Fragile X Syndrome.

    PubMed

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-05-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived language-support strategies. All sessions were implemented through distance videoteleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data were collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies, and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed. PMID:27119214

  17. The effect of written text on comprehension of spoken English as a foreign language.

    PubMed

    Diao, Yali; Chandler, Paul; Sweller, John

    2007-01-01

    Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas. PMID:17650920

  18. Spoken Lebanese.

    ERIC Educational Resources Information Center

    Feghali, Maksoud N.

    This book teaches the Arabic Lebanese dialect through topics such as food, clothing, transportation, and leisure activities. It also provides background material on the Arab World in general and the region where Lebanese Arabic is spoken or understood--Lebanon, Syria, Jordan, Palestine--in particular. This language guide is based on the phonetic…

  19. Saving Chinese-Language Education in Singapore

    ERIC Educational Resources Information Center

    Lee, Cher Leng

    2012-01-01

    Three-quarters of Singapore's population consists of ethnic Chinese, and yet, learning Chinese (Mandarin) has been a headache for many Singapore students. Recently, many scholars have argued that the rhetoric of language planning for Mandarin Chinese should be shifted from emphasizing its cultural value to stressing its economic value since…

  20. Effects of Early Auditory Experience on the Spoken Language of Deaf Children at 3 Years of Age

    PubMed Central

    Nicholas, Johanna Grant; Geers, Ann E.

    2010-01-01

    Objective: By age three, typically developing children have achieved extensive vocabulary and syntax skills that facilitate both cognitive and social development. Substantial delays in spoken language acquisition have been documented for children with severe-profound deafness, even those with auditory oral training and early hearing aid use. This study documents the spoken language skills achieved by orally educated three-year-olds whose profound hearing loss was identified and hearing aids fitted between 1 and 30 months of age and who received a cochlear implant between 12 and 38 months of age. The purpose of the analysis was to examine the effects of age, duration and type of early auditory experience on spoken language competence at age 3.5. Design: The spoken language skills of 76 children who had used a cochlear implant for at least 7 months were evaluated via standardized 30-minute language sample analysis, a parent-completed vocabulary checklist, and a teacher language-rating scale. The children were recruited from and enrolled in oral education programs or therapy practices across the United States. Inclusion criteria included: presumed deaf since birth, English the primary language of the home, no other known conditions that interfere with speech/language development, enrolled in programs using oral education methods, and no known problems with the cochlear implant lasting over 30 days. Results: Strong correlations were obtained among all language measures. Therefore, principal components analysis was used to derive a single Language Factor score for each child. A number of possible predictors of language outcome were examined, including age at identification and intervention with a hearing aid, duration of use of a hearing aid, pre-implant PTA threshold with a hearing aid, PTA threshold with a cochlear implant, and duration of use of a cochlear implant/age at implantation (the last two variables were practically identical since all children were tested
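
    The abstract's derivation of a single Language Factor from strongly correlated measures is standard principal components analysis. A hedged sketch on synthetic data follows; all variable names and numbers are invented.

    ```python
    # Hedged sketch: a single "Language Factor" as the first principal
    # component of standardized language measures. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    n_children = 76
    ability = rng.standard_normal(n_children)  # shared latent language ability
    # Three correlated measures (stand-ins for language sample, vocabulary
    # checklist, and teacher rating scores).
    measures = np.column_stack(
        [ability + 0.3 * rng.standard_normal(n_children) for _ in range(3)]
    )

    z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
    _, _, vt = np.linalg.svd(z, full_matrices=False)   # PCA via SVD
    language_factor = z @ vt[0]                        # first-component scores
    print(round(abs(np.corrcoef(language_factor, ability)[0, 1]), 2))
    ```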

  1. Written Language Is as Natural as Spoken Language: A Biolinguistic Perspective

    ERIC Educational Resources Information Center

    Aaron, P. G.; Joshi, R. Malatesha

    2006-01-01

    A commonly held belief is that language is an aspect of the biological system since the capacity to acquire language is innate and evolved along Darwinian lines. Written language, on the other hand, is thought to be an artifact and a surrogate of speech; it is, therefore, neither natural nor biological. This disparaging view of written language,…

  2. Building Language Blocks in L2 Japanese: Chunk Learning and the Development of Complexity and Fluency in Spoken Production

    ERIC Educational Resources Information Center

    Taguchi, Naoko

    2008-01-01

    This pilot study examined the development of complexity and fluency of second language (L2) spoken production among L2 learners who received extensive practice on grammatical chunks as constituent units of discourse. Twenty-two students enrolled in an elementary Japanese course at a U.S. university received classroom instruction on 40 grammatical…

  3. Parent and Teacher Perceptions of Transitioning Students from a Listening and Spoken Language School to the General Education Setting

    ERIC Educational Resources Information Center

    Rugg, Natalie; Donne, Vicki

    2011-01-01

    The present study examines the perception of parents and teachers towards the transition process and preparedness of students who are deaf or hard of hearing from a listening and spoken language school in the Northeastern United States to a general education setting in their home school districts. The study uses a mixed methods design with…

  4. Influence of Spoken Language on the Initial Acquisition of Reading/Writing: Critical Analysis of Verbal Deficit Theory

    ERIC Educational Resources Information Center

    Ramos-Sanchez, Jose Luis; Cuadrado-Gordillo, Isabel

    2004-01-01

    This article presents the results of a quasi-experimental study of whether there exists a causal relationship between spoken language and the initial learning of reading/writing. The subjects were two matched samples each of 24 preschool pupils (boys and girls), controlling for certain relevant external variables. It was found that there was no…

  5. Does It Really Matter whether Students' Contributions Are Spoken versus Typed in an Intelligent Tutoring System with Natural Language?

    ERIC Educational Resources Information Center

    D'Mello, Sidney K.; Dowell, Nia; Graesser, Arthur

    2011-01-01

    There is the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The "speech facilitation" hypothesis predicts that spoken input will "increase" learning, whereas the "text…

  6. Brief Report: The Effects of Typed and Spoken Modality Combinations on the Language Performance of Adults with Autism.

    ERIC Educational Resources Information Center

    Forsey, Janice; And Others

    1996-01-01

    A study of five adult males with autism investigated which combination of input/output modalities (typed or spoken) enhanced the syntactic, semantic, and/or pragmatic performance of individuals with autism when engaging in conversations with a normal language adult. Results found that typed communications facilitated the use of longer utterances.…

  7. Semantic Relations Cause Interference in Spoken Language Comprehension When Using Repeated Definite References, Not Pronouns.

    PubMed

    Peters, Sara A; Boiteau, Timothy W; Almor, Amit

    2016-01-01

    The choice and processing of referential expressions depend on the referents' status within the discourse, such that pronouns are generally preferred over full repetitive references when the referent is salient. Here we report two visual-world experiments showing that: (1) in spoken language comprehension, this preference is reflected in delayed fixations to referents mentioned after repeated definite references compared with after pronouns; (2) repeated references are processed differently than new references; (3) long-term semantic memory representations affect the processing of pronouns and repeated names differently. Overall, these results support the role of semantic discourse representation in referential processing and reveal important details about how pronouns and full repeated references are processed in the context of these representations. The results suggest the need for modifications to current theoretical accounts of reference processing such as Discourse Prominence Theory and the Informational Load Hypothesis. PMID:26973552

  8. Semantic Relations Cause Interference in Spoken Language Comprehension When Using Repeated Definite References, Not Pronouns

    PubMed Central

    Peters, Sara A.; Boiteau, Timothy W.; Almor, Amit

    2016-01-01

    The choice and processing of referential expressions depend on the referents' status within the discourse, such that pronouns are generally preferred over full repetitive references when the referent is salient. Here we report two visual-world experiments showing that: (1) in spoken language comprehension, this preference is reflected in delayed fixations to referents mentioned after repeated definite references compared with after pronouns; (2) repeated references are processed differently than new references; (3) long-term semantic memory representations affect the processing of pronouns and repeated names differently. Overall, these results support the role of semantic discourse representation in referential processing and reveal important details about how pronouns and full repeated references are processed in the context of these representations. The results suggest the need for modifications to current theoretical accounts of reference processing such as Discourse Prominence Theory and the Informational Load Hypothesis. PMID:26973552

  9. The relationship between language spoken and smoking among Hispanic-Latino youth in New York City.

    PubMed Central

    Dusenbery, L; Epstein, J A; Botvin, G J; Diaz, T

    1994-01-01

    This study was designed to examine the relationship between language spoken and smoking (at least once a month) among New York City Hispanic-Latino adolescents, using a large sample of specific Hispanic-Latino subgroups (Puerto Rican, Dominican, Colombian, and Ecuadorian youth) and controlling for social and environmental factors. The sample included 3,129 Hispanic-Latino students in 47 New York City public and parochial schools. Of the total sample, 43 percent were Puerto Rican, 20 percent Dominican, 7 percent Colombian, and 7 percent Ecuadorian. The students completed questionnaires that were designed to assess social and environmental influences on their smoking and determine what languages they spoke (English and Spanish) with parents and friends. Self-reported smoking data were collected by means of the bogus pipeline to enhance the veracity of self-reports. Of the students entered into the logistic regression model, which included background, social influence, and language use variables, 101 were smokers. Logistic regression analysis indicated that being bicultural (speaking both English and Spanish) at home and with friends appeared to increase the odds of currently smoking. Separate logistic regression analyses for girls and boys revealed that being bicultural at home increased the odds of currently smoking for boys but not girls. Results are discussed in terms of their implications for prevention. PMID:8190866
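
    The analysis strategy, logistic regression of smoking status on language-use indicators plus covariates, with coefficients read as odds ratios, can be sketched on simulated data as below; the variable names and effect sizes are invented, not the study's.

    ```python
    # Sketch of the analysis strategy on simulated data: logistic regression
    # of smoking on a "bicultural at home" indicator plus a covariate, with
    # the coefficient exponentiated into an odds ratio. Effects are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 3129
    bicultural = rng.integers(0, 2, n)   # speaks both English and Spanish at home
    grade = rng.integers(7, 10, n)       # hypothetical background covariate
    logit = -3.5 + 0.8 * bicultural + 0.1 * (grade - 8)
    smokes = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([bicultural, grade])
    model = LogisticRegression().fit(X, smokes)
    print("odds ratio (bicultural):", round(float(np.exp(model.coef_[0][0])), 2))
    ```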

  10. Taiwan's Chinese Language Development and the Creation of Language Teaching Analysis

    ERIC Educational Resources Information Center

    Tsai, Cheng-Hui; Wang, Chuan Po

    2015-01-01

    In recent years, Chinese language teaching in Taiwan has developed in response to international trends, and instruction is now in full swing at all levels. The recent boom in Chinese language teaching has also kindled a passion for actively learning Chinese among many overseas children, and even many overseas…

  11. Child-centered collaborative conversations that maximize listening and spoken language development for children with hearing loss.

    PubMed

    Garber, Ashley S; Nevins, Mary Ellen

    2012-11-01

    In the period that begins with early intervention enrollment and ends with the termination of formal education, speech-language pathologists (SLPs) will have numerous opportunities to form professional relationships that can enhance any child's listening and spoken language accomplishments. SLPs who initiate and/or nurture these relationships are urged to place the needs of the child as the core value that drives decision making. Addressing this priority will allow for the collaborative conversations necessary to develop an effective intervention plan at any level. For the SLP, the purpose of these collaborative conversations will be twofold: identifying the functional communication needs of the child with hearing loss across settings and sharing practical strategies to encourage listening and spoken language skill development. Auditory first, wait time, sabotage, and thinking turns are offered as four techniques easily implemented by all service providers to support the child with hearing loss in all educational settings. PMID:23081786

  12. Communicative Language Teaching in the Chinese Environment

    ERIC Educational Resources Information Center

    Hu, Wei

    2010-01-01

    In order to explore effective ways to develop Chinese English learners' communicative competence, this study first briefly reviews the advantages of the communicative language teaching (CLT) method, which is widely practiced in Western countries, and analyzes in detail its obstacles in the Chinese classroom context. It then offers guidelines for…

  13. Grammatical Processing of Spoken Language in Child and Adult Language Learners

    ERIC Educational Resources Information Center

    Felser, Claudia; Clahsen, Harald

    2009-01-01

    This article presents a selective overview of studies that have investigated auditory language processing in children and late second-language (L2) learners using online methods such as event-related potentials (ERPs), eye-movement monitoring, or the cross-modal priming paradigm. Two grammatical phenomena are examined in detail, children's and…

  14. Are Young Children With Cochlear Implants Sensitive to the Statistics of Words in the Ambient Spoken Language?

    PubMed Central

    McGregor, Karla K.; Spencer, Linda J.

    2015-01-01

    Purpose: The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation. Method: We analyzed archival data collected from the parents of 36 children who received cochlear implantation (20 unilateral, 16 bilateral) before 24 months of age. The parents reported their children's word productions 12 months after implantation using the MacArthur Communicative Development Inventories: Words and Sentences (Fenson et al., 1993). We computed the number of words, out of 292 possible monosyllabic nouns, verbs, and adjectives, that each child was reported to say and calculated the average phonotactic probability, neighborhood density, and word frequency of the reported words. Results: Spoken vocabulary size positively correlated with average phonotactic probability and negatively correlated with average neighborhood density, but only in children with bilateral CIs. Conclusion: At 12 months postimplantation, children with bilateral CIs demonstrate sensitivity to statistical characteristics of words in the ambient spoken language akin to that reported for children with normal hearing during the early stages of lexical development. Children with unilateral CIs do not. PMID:25677929
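
    The two lexical statistics central to this study can be illustrated concretely. The sketch below computes substitution-only neighborhood density and positional-segment phonotactic probability over a toy phonemic lexicon; it approximates, rather than reproduces, the published norms the authors used.

    ```python
    # Toy illustration of neighborhood density and positional phonotactic
    # probability over a made-up phonemic lexicon (not the CDI word set).
    from collections import Counter

    LEXICON = ["kat", "bat", "kab", "dog", "dot", "kot"]

    def neighborhood_density(word, lexicon):
        """Count words at phoneme edit distance 1 (substitutions only, for
        brevity; full density also counts additions and deletions)."""
        return sum(
            len(w) == len(word) and sum(a != b for a, b in zip(w, word)) == 1
            for w in lexicon if w != word
        )

    def phonotactic_probability(word, lexicon):
        """Mean positional probability of the word's segments in the lexicon."""
        probs = []
        for i, seg in enumerate(word):
            counts = Counter(w[i] for w in lexicon if len(w) > i)
            probs.append(counts[seg] / sum(counts.values()))
        return sum(probs) / len(probs)

    for w in LEXICON:
        print(w, neighborhood_density(w, LEXICON),
              round(phonotactic_probability(w, LEXICON), 2))
    ```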

  15. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    NASA Astrophysics Data System (ADS)

    Feenaughty, Lynda

    Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors bear on listener impressions, for three connected speech tasks presumed to differ in cognitive-linguistic demand, for four carefully defined speaker groups: (1) MS with cognitive deficits (MSCI); (2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS); (3) MS without dysarthria or cognitive deficits (MS); and (4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: 48 speakers, including 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers, participated. The three MS groups and the control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function. A standard z-score of ≤ -1.50 indicated deficits in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. Experimental speech tasks of interest included audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained, including subordination index, inter-sentence cohesion adequacy, and lexical diversity. Ten listeners

  16. Combining Speech Recognition/Natural Language Processing with 3D Online Learning Environments to Create Distributed Authentic and Situated Spoken Language Learning

    ERIC Educational Resources Information Center

    Jones, Greg; Squires, Todd; Hicks, Jeramie

    2008-01-01

    This article will describe research done at the National Institute of Multimedia in Education, Japan and the University of North Texas on the creation of a distributed Internet-based spoken language learning system that would provide more interactive and motivating learning than current multimedia and audiotape-based systems. The project combined…

  17. Neural processing of spoken words in specific language impairment and dyslexia.

    PubMed

    Helenius, Päivi; Parviainen, Tiina; Paetau, Ritva; Salmelin, Riitta

    2009-07-01

    Young adults with a history of specific language impairment (SLI) differ from reading-impaired (dyslexic) individuals in terms of limited vocabulary and poor verbal short-term memory. Phonological short-term memory has been shown to play a significant role in learning new words. We investigated the neural signatures of auditory word recognition and word repetition in young adults with SLI, dyslexia and normal language development using magnetoencephalography. The stimuli were 7-8 letter spoken real words and pseudo-words. They evoked a transient peak at 100 ms (N100m) followed by longer-lasting activation peaking around 400 ms (N400m) in the left and right superior temporal cortex. Both word repetition (first vs. immediately following second presentation) and lexicality (words vs. pseudowords) modulated the N400m response. An effect of lexicality was detected about 400 ms onwards as activation culminated for words but continued for pseudo-words. This effect was more pronounced in the left than right hemisphere in the control subjects. The left hemisphere lexicality effect was also present in the dyslexic adults, but it was non-significant in the subjects with SLI, possibly reflecting their limited vocabulary. The N400m activation between 200 and 700 ms was attenuated by the immediate repetition of words and pseudo-words in both hemispheres. In SLI adults the repetition effect evaluated at 200-400 ms was abnormally weak. This finding suggests impaired short-term maintenance of linguistic activation that underlies word recognition. Furthermore, the size of the repetition effect decreased from control subjects through dyslexics to SLIs, i.e. when advancing from milder to more severe language impairment. The unusually rapid decay of speech-evoked activation could have a detrimental role on vocabulary growth in children with SLI. PMID:19498087

  18. Is there an effect of dysphonic teachers' voices on children's processing of spoken language?

    PubMed

    Rogerson, Jemma; Dodd, Barbara

    2005-03-01

    There is a vast body of literature on the causes, prevalence, implications, and issues of vocal dysfunction in teachers. However, the educational effect of teacher vocal impairment is largely unknown. The purpose of this study was to investigate the effect of impaired voice quality on children's processing of spoken language. One hundred and seven children (age range 9.2 to 10.6 years; mean 9.8 years; SD 3.76 months) listened to three video passages, one read in a control voice, one in a mild dysphonic voice, and one in a severe dysphonic voice. After each video passage, children were asked to answer six questions, with multiple-choice answers. The results indicated that children's perceptions of speech across the three voice qualities differed, regardless of gender, IQ, and school attended. Performance in the control voice passages was better than performance in the mild and severe dysphonic voice passages. No difference was found between performance in the mild and severe dysphonic voice passages, highlighting that any form of vocal impairment is detrimental to children's speech processing and is therefore likely to have a negative educational effect. These findings, in light of the high rate of vocal dysfunction in teachers, further support the implementation of specific voice care education for those in the teaching profession. PMID:15766849

  19. Implicit learning of nonadjacent phonotactic dependencies in the perception of spoken language

    NASA Astrophysics Data System (ADS)

    McLennan, Conor T.; Luce, Paul A.

    2001-05-01

    We investigated the learning of nonadjacent phonotactic dependencies in adults. Following previous research examining learning of dependencies at a grammatical level (Gomez, 2002), we manipulated the co-occurrence of nonadjacent phonological segments within a spoken syllable. Each listener was exposed to consonant-vowel-consonant nonword stimuli produced by one of two phonological grammars. Both languages contained the same adjacent dependencies between the initial consonant-vowel and final vowel-consonant sequences but differed on the co-occurrences of initial and final consonants. The number of possible types of vowels that intervened between the initial and final consonants was also manipulated. Listeners' learning of nonadjacent segmental dependencies was evaluated in a speeded recognition task in which they heard (1) old nonwords on which they had been trained, (2) new nonwords generated by the grammar on which they had been trained, and (3) new nonwords generated by the grammar on which they had not been trained. The results provide evidence for listeners' sensitivity to nonadjacent dependencies. However, this sensitivity is manifested as an inhibitory competition effect rather than a facilitative effect on pattern processing. [Research supported by Research Grant No. R01 DC 0265802 from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health.]

  1. Research on Spoken Dialogue Systems

    NASA Technical Reports Server (NTRS)

    Aist, Gregory; Hieronymus, James; Dowding, John; Hockey, Beth Ann; Rayner, Manny; Chatzichrisafis, Nikos; Farrell, Kim; Renders, Jean-Michel

    2010-01-01

    Research in the field of spoken dialogue systems has been performed with the goal of making such systems more robust and easier to use in demanding situations. The term "spoken dialogue systems" signifies unified software systems containing speech-recognition, speech-synthesis, dialogue management, and ancillary components that enable human users to communicate, using natural spoken language or nearly natural prescribed spoken language, with other software systems that provide information and/or services.

  2. Birds, primates, and spoken language origins: behavioral phenotypes and neurobiological substrates

    PubMed Central

    Petkov, Christopher I.; Jarvis, Erich D.

    2012-01-01

    Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories are motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set of theories are cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that behaviorally vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species. PMID:22912615

  3. How vocabulary size in two languages relates to efficiency in spoken word recognition by young Spanish-English bilinguals

    PubMed Central

    Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda

    2010-01-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n=26; 2;6 yrs). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children’s facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children’s ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language. PMID:19726000
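
    The key analysis here, a within-language association after controlling for measures in the other language, amounts to a partial correlation. A minimal sketch by residualization on simulated data follows; all variable names are invented.

    ```python
    # Sketch: partial correlation by residualization on simulated data --
    # Spanish processing efficiency vs. Spanish vocabulary, controlling for
    # English efficiency and English vocabulary. All names are invented.
    import numpy as np

    def residualize(y, covariates):
        """Residuals of y after least-squares regression on the covariates."""
        X = np.column_stack([np.ones(len(y)), covariates])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta

    rng = np.random.default_rng(3)
    n = 26
    vocab_es = rng.standard_normal(n)
    vocab_en = rng.standard_normal(n)          # uncorrelated with Spanish
    efficiency_en = rng.standard_normal(n)
    efficiency_es = 0.7 * vocab_es + 0.3 * rng.standard_normal(n)

    controls = np.column_stack([efficiency_en, vocab_en])
    r_partial = np.corrcoef(residualize(efficiency_es, controls),
                            residualize(vocab_es, controls))[0, 1]
    print("partial r (Spanish):", round(r_partial, 2))
    ```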

  4. Semantic information mediates visual attention during spoken word recognition in Chinese: Evidence from the printed-word version of the visual-world paradigm.

    PubMed

    Shen, Wei; Qu, Qingqing; Li, Xingshan

    2016-07-01

    In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention but also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm. PMID:26993126

  5. On Chinese Loan Words from English Language

    ERIC Educational Resources Information Center

    Yan, Yun; Deng, Tianbai

    2009-01-01

    Over the past twenty years, with China's reform and opening-up policy toward the outside world, there has been a sharp increase in English loan words in Chinese. On the one hand, this demonstrates that China's soft power has been growing. On the other hand, some language pollution has meanwhile been caused by the non-standard use of loan words in Chinese.…

  6. EEG decoding of spoken words in bilingual listeners: from words to language invariant semantic-conceptual representations

    PubMed Central

    Correia, João M.; Jansma, Bernadette; Hausfeld, Lars; Kikkert, Sanne; Bonte, Milene

    2015-01-01

    Spoken word recognition and production require fast transformations between acoustic, phonological, and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent, but acoustically different words. Here we exploit this capacity of bilinguals to investigate input invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g., “paard”–“horse”). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and generalize meaning across two languages (across-language generalization). Furthermore, employing two EEG feature selection approaches, we assessed the contribution of temporal and oscillatory EEG features to our classification results. MVPA revealed that within-language discrimination was possible in a broad time-window (~50–620 ms) after word onset, probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550–600 ms, suggesting the activation of common semantic-conceptual representations from the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of MVPA to decode individual spoken words from EEG responses and to assess the spectro-temporal dynamics of their language invariant semantic-conceptual representations. We discuss how this method and results could be relevant to track the neural mechanisms underlying conceptual encoding in comprehension and
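
    The across-language generalization analysis can be schematized as training a classifier on one language's trial patterns and testing on the other's. The sketch below uses synthetic features and a linear SVM; it omits the cross-validation and feature-selection steps the study describes, and all dimensions are invented.

    ```python
    # Schematic of across-language generalization: train a decoder on
    # (synthetic) EEG patterns for Dutch words, test on their English
    # translations. Dimensions and offsets are invented.
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(4)
    n_trials, n_features, n_concepts = 200, 64, 4
    concept_codes = rng.standard_normal((n_concepts, n_features))  # shared semantics

    def simulate(language_offset):
        """Trials = shared concept pattern + language-specific shift + noise."""
        y = rng.integers(0, n_concepts, n_trials)
        X = (concept_codes[y] + language_offset
             + rng.standard_normal((n_trials, n_features)))
        return X, y

    X_dutch, y_dutch = simulate(language_offset=0.0)
    X_english, y_english = simulate(language_offset=0.2)

    clf = LinearSVC(dual=False).fit(X_dutch, y_dutch)
    print(f"across-language accuracy: {clf.score(X_english, y_english):.2f} "
          f"(chance = {1 / n_concepts:.2f})")
    ```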

  7. Reply to David Kemmerer's "A Critique of Mark D. Allen's "The Preservation of Verb Subcategory Knowledge in a Spoken Language Comprehension Deficit""

    ERIC Educational Resources Information Center

    Allen, Mark D.; Owens, Tyler E.

    2008-01-01

    Allen [Allen, M. D. (2005). The preservation of verb subcategory knowledge in a spoken language comprehension deficit. "Brain and Language," 95, 255-264] presents evidence from a single patient, WBN, to motivate a theory of lexical processing and representation in which syntactic information may be encoded and retrieved independently of semantic…

  8. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2012-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an event-related potential (ERP) norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a…

  9. Chunk Learning and the Development of Spoken Discourse in a Japanese as a Foreign Language Classroom

    ERIC Educational Resources Information Center

    Taguchi, Naoko

    2007-01-01

    This study examined the development of spoken discourse among L2 learners of Japanese who received extensive practice on grammatical chunks. Participants in this study were 22 college students enrolled in an elementary Japanese course. They received instruction on a set of grammatical chunks in class through communicative drills and the…

  10. Are Phonological Representations of Printed and Spoken Language Isomorphic? Evidence from the Restrictions on Unattested Onsets

    ERIC Educational Resources Information Center

    Berent, Iris

    2008-01-01

    Are the phonological representations of printed and spoken words isomorphic? This question is addressed by investigating the restrictions on onsets. Cross-linguistic research suggests that onsets of rising sonority are preferred to sonority plateaus, which, in turn, are preferred to sonority falls (e.g., bnif, bdif, lbif). Of interest is whether…

  11. Fremdsprachenunterricht als Kommunikationsprozess (Foreign Language Teaching as a Communicative Process). Language Centre News, No. 1. Focus on Spoken Language.

    ERIC Educational Resources Information Center

    Butzkamm, Wolfgang

    Teaching, as a communicative process, ranges between purely message-oriented communication (the goal) and purely language-oriented communication (a means). Classroom discourse ("Close the window", etc.) is useful as a drill but is also message-oriented. Skill in message-oriented communication is acquired only through practice in this kind of…

  12. A pilot study of telepractice delivery for teaching listening and spoken language to children with hearing loss.

    PubMed

    Constantinescu, Gabriella; Waite, Monique; Dornan, Dimity; Rushbrooke, Emma; Brown, Jackie; McGovern, Jane; Ryan, Michelle; Hill, Anne

    2014-04-01

    Telemedicine ("telepractice") allows improved access to specialised early intervention services such as Auditory-Verbal Therapy (AVT) for children with hearing loss. We investigated the effectiveness of a tele-AVT programme (eAVT) in the spoken language development of a group of young children with hearing loss. In a retrospective study we compared the language outcomes of children with bilateral hearing loss receiving eAVT with those of a control group who received therapy in person. Seven children in each group (mean age 2.4 years) were matched on pre-amplification hearing level for the better hearing ear, age at optimal amplification, and enrolment in the AVT programme. The eAVT sessions were conducted via Skype. Results on the Preschool Language Scale-4 were compared at 2 years post optimal amplification. There were no significant differences in language scores between the two groups. Language scores for the children in the eAVT group were within the normal range for children with normal hearing. The results suggest that early intervention AVT delivered via telepractice may be as effective as delivery in person for children with hearing loss. PMID:24643949

  13. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    PubMed Central

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2011-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., “Luckily Ben had picked up some salt and pepper/basil”, preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent)). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition. PMID:23002319

  14. Developing Accuracy and Fluency in Spoken English of Chinese EFL Learners

    ERIC Educational Resources Information Center

    Wang, Zhiqin

    2014-01-01

    Chinese EFL learners may have difficulty in speaking fluent and accurate English, for their speaking competence is likely to be influenced by cognitive, linguistic, and affective factors. With the aim of enhancing those learners' oral proficiency, this paper first discusses three effective models of teaching English speaking, and then proposes a…

  15. How and When Accentuation Influences Temporally Selective Attention and Subsequent Semantic Processing during On-Line Spoken Language Comprehension: An ERP Study

    ERIC Educational Resources Information Center

    Li, Xiao-qing; Ren, Gui-qin

    2012-01-01

    An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…

  16. Self-Ratings of Spoken Language Dominance: A Multilingual Naming Test (MINT) and Preliminary Norms for Young and Aging Spanish-English Bilinguals

    ERIC Educational Resources Information Center

    Gollan, Tamar H.; Weissberger, Gali H.; Runnqvist, Elin; Montoya, Rosa I.; Cera, Cynthia M.

    2012-01-01

    This study investigated correspondence between different measures of bilingual language proficiency contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and…

  17. Parents Sharing Books with Young Deaf Children in Spoken English and in BSL: The Common and Diverse Features of Different Language Settings

    ERIC Educational Resources Information Center

    Swanwick, Ruth; Watson, Linda

    2007-01-01

    Twelve parents of young deaf children were recorded sharing books with their deaf child--six from families using British Sign Language (BSL) and six from families using spoken English. Although all families were engaged in sharing books with their deaf child and concerned to promote literacy development, they approached the task differently and…

  18. A Multilingual Approach to Analysing Standardized Test Results: Immigrant Primary School Children and the Role of Languages Spoken in a Bi-/Multilingual Community

    ERIC Educational Resources Information Center

    De Angelis, Gessica

    2014-01-01

    The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…

  19. Simulating Language-specific and Language-general Effects in a Statistical Learning Model of Chinese Reading

    PubMed Central

    Yang, Jianfeng; McCandliss, Bruce D.; Shu, Hua; Zevin, Jason D.

    2009-01-01

    Many theoretical models of reading assume that different writing systems require different processing assumptions. For example, it is often claimed that print-to-sound mappings in Chinese are not represented or processed sub-lexically. We present a connectionist model that learns the print-to-sound mappings of Chinese characters using the same functional architecture and learning rules that have been applied to English. The model predicts an interaction between item frequency and print-to-sound consistency analogous to what has been found for English, as well as a language-specific regularity effect particular to Chinese. Behavioral naming experiments using the same test items as the model confirmed these predictions. Corpus properties and analyses of the internal representations that evolved over training revealed that the model was able to capitalize on information in “phonetic components” – sub-lexical structures of variable size that convey probabilistic information about pronunciation. The results suggest that adult reading performance across very different writing systems may be explained as the result of applying the same learning mechanisms to the particular input statistics of writing systems shaped by both culture and the exigencies of communicating spoken language in a visual medium. PMID:20161189
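
    The core of the modeling claim (one functional architecture, one learning rule, applied to the input statistics of different writing systems) can be caricatured in a few lines. The sketch below is a toy stand-in using a scikit-learn multilayer perceptron on random placeholder representations; it is not the authors' model, and the representations, sizes, and frequency distribution are invented.

```python
# Toy print-to-sound learner: item frequency is realized as sampling
# probability, which is what gives rise to frequency-by-consistency effects
# in models of this family.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_items = 500
orthography = rng.integers(0, 2, size=(n_items, 60)).astype(float)
phonology = rng.integers(0, 2, size=(n_items, 40)).astype(float)
frequency = rng.zipf(1.5, n_items).astype(float)  # skewed, corpus-like

# High-frequency items are presented more often during training.
probs = frequency / frequency.sum()
idx = rng.choice(n_items, size=20_000, p=probs)

net = MLPRegressor(hidden_layer_sizes=(100,), max_iter=50)
net.fit(orthography[idx], phonology[idx])

# Proxy for naming accuracy: reconstruction error per item; the real model
# scores output against each character's target pronunciation.
err = ((net.predict(orthography) - phonology) ** 2).mean(axis=1)
hi = frequency > np.median(frequency)
print("mean error, high- vs low-frequency items:", err[hi].mean(), err[~hi].mean())
```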

  20. Will They Catch Up? The Role of Age at Cochlear Implantation in the Spoken Language Development of Children with Severe to Profound Hearing Loss

    ERIC Educational Resources Information Center

    Nicholas, Johanna Grant; Geers, Ann E.

    2007-01-01

    Purpose: The authors examined the benefits of younger cochlear implantation, longer cochlear implant use, and greater pre-implant aided hearing to spoken language at 3.5 and 4.5 years of age. Method: Language samples were obtained at ages 3.5 and 4.5 years from 76 children who received an implant by their 3rd birthday. Hierarchical linear modeling…

  1. Language Attitudes and Heritage Language Maintenance among Chinese Immigrant Families in the USA

    ERIC Educational Resources Information Center

    Zhang, Donghui; Slaughter-Defoe, Diana T.

    2009-01-01

    This qualitative study investigates attitudes toward heritage language (HL) maintenance among Chinese immigrant parents and their second-generation children. Specific attention is given to exploring (1) what attitudes are held by the Chinese parents and children toward Chinese language maintenance in the USA, (2) what efforts are engaged in by the…

  2. English Language Ideologies in the Chinese Foreign Language Education Policies: A World-System Perspective

    ERIC Educational Resources Information Center

    Pan, Lin

    2011-01-01

    This paper investigates the Chinese state's English language ideologies as reflected in official Chinese foreign language education policies (FLEP). It contends that the Chinese FLEP not only indicate a way by which the state gains consent, maintains cultural governance, and exerts hegemony internally, but also shows the traces of the combined…

  3. Chinese Language Education in Europe: The Confucius Institutes

    ERIC Educational Resources Information Center

    Starr, Don

    2009-01-01

    This article explores the background to the Chinese government's decision to embark on a programme of promoting the study of Chinese language and culture overseas. This includes the impact of Joseph Nye's concept of "soft power" in China, ownership of the national language, the Confucius connection, and how these factors interact with political…

  4. Teacher Education Curriculum for Teaching Chinese as a Foreign Language

    ERIC Educational Resources Information Center

    Attaran, Mohammad; Yishuai, Hu

    2015-01-01

    The worldwide growing demand of CFL (Chinese as a Foreign Language) teachers has many implications for both curriculum development and teacher education. Much evidence has shown more in-depth research is needed in the field of teaching Chinese as a foreign language (Tsung & Cruickshank, 2011). Studying in-service teachers' experience in…

  5. Research among Learners of Chinese as a Foreign Language. Chinese Language Teachers Association Monograph Series. Volume IV

    ERIC Educational Resources Information Center

    Everson, Michael E., Ed.; Shen, Helen H., Ed.

    2010-01-01

    Cutting-edge in its approach and international in its authorship, this fourth monograph in a series sponsored by the Chinese Language Teachers Association features eight research studies that explore a variety of themes, topics, and perspectives important to a variety of stakeholders in the Chinese language learning community. Employing a wide…

  6. Validation of the "Chinese Language Classroom Learning Environment Inventory" for Investigating the Nature of Chinese Language Classrooms

    ERIC Educational Resources Information Center

    Lian, Chua Siew; Wong, Angela F. L.; Der-Thanq, Victor Chen

    2006-01-01

    The Chinese Language Classroom Environment Inventory (CLCEI) is a bilingual instrument developed for use in measuring students' and teachers' perceptions toward their Chinese Language classroom learning environments in Singapore secondary schools. The English version of the CLCEI was customised from the English version of the "What is happening in…

  7. Language Ability and Verbal and Nonverbal Executive Functioning in Deaf Students Communicating in Spoken English

    ERIC Educational Resources Information Center

    Remine, Maria D.; Care, Esther; Brown, P. Margaret

    2008-01-01

    The internal use of language during problem solving is considered to play a key role in executive functioning. This role provides a means for self-reflection and self-questioning during the formation of rules and plans and a capacity to control and monitor behavior during problem-solving activity. Given that increasingly sophisticated language is…

  8. Do Spoken Nonword and Sentence Repetition Tasks Discriminate Language Impairment in Children with an ASD?

    ERIC Educational Resources Information Center

    Harper-Hill, Keely; Copland, David; Arnott, Wendy

    2013-01-01

    The primary aim of this paper was to investigate heterogeneity in language abilities of children with a confirmed diagnosis of an ASD (N = 20) and children with typical development (TD; N = 15). Group comparisons revealed no differences between ASD and TD participants on standard clinical assessments of language ability, reading ability or…

  9. The Sound of Motion in Spoken Language: Visual Information Conveyed by Acoustic Properties of Speech

    ERIC Educational Resources Information Center

    Shintel, Hadas; Nusbaum, Howard C.

    2007-01-01

    Language is generally viewed as conveying information through symbols whose form is arbitrarily related to their meaning. This arbitrary relation is often assumed to also characterize the mental representations underlying language comprehension. We explore the idea that visuo-spatial information can be analogically conveyed through acoustic…

  10. Informativeness of the Spoken Narratives of Younger and Older Adolescents with Specific Language Impairment and Their Counterparts with Normal Language

    ERIC Educational Resources Information Center

    Reed, Vicki A.; Patchell, Frederick C.; Coggins, Truman E.; Hand, Linda S.

    2007-01-01

    A large body of literature describing the narrative skills of young children with and without language impairments exists. However, there has been only limited study of the informativeness of narratives of adolescents with normally developing language (NL) and those of adolescents with specific language impairment (SLI), even though narratives…

  11. Bilateral Versus Unilateral Cochlear Implants in Children: A Study of Spoken Language Outcomes

    PubMed Central

    Harris, David; Bennet, Lisa; Bant, Sharyn

    2014-01-01

    Objectives: Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare the language abilities of children with unilateral and bilateral CIs, to quantify the rate of any improvement in language attributable to bilateral CIs, and to document other predictors of language development in children with CIs. Design: The receptive vocabulary and language development of 91 children were assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children’s intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes was examined. Results: Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI. In terms of parenting style, high levels of parental involvement, low amounts of…

  12. Spared and Impaired Spoken Discourse Processing in Schizophrenia: Effects of Local and Global Language Context

    PubMed Central

    Boudewyn, Megan A.; Long, Debra L.; Luck, Steve J.; Kring, Ann M.; Ragland, J. Daniel; Ranganath, Charan; Lesh, Tyler; Niendam, Tara; Solomon, Marjorie; Mangun, George R.; Carter, Cameron S.

    2013-01-01

    Individuals with schizophrenia are impaired in a broad range of cognitive functions, including impairments in the controlled maintenance of context-relevant information. In this study, we used ERPs in human subjects to examine whether impairments in the controlled maintenance of spoken discourse context in schizophrenia lead to overreliance on local associations among the meanings of individual words. Healthy controls (n = 22) and patients (n = 22) listened to short stories in which we manipulated global discourse congruence and local priming. The target word in the last sentence of each story was globally congruent or incongruent and locally associated or unassociated. ERP local association effects did not significantly differ between control participants and schizophrenia patients. However, in contrast to controls, patients only showed effects of discourse congruence when targets were primed by a word in the local context. When patients had to use discourse context in the absence of local priming, they showed impaired brain responses to the target. Our findings indicate that schizophrenia patients are impaired during discourse comprehension when demands on controlled maintenance of context are high. We further found that ERP measures of increased reliance on local priming predicted reduced social functioning, suggesting that alterations in the neural mechanisms underlying discourse comprehension have functional consequences in the illness. PMID:24068824

  13. Spared and impaired spoken discourse processing in schizophrenia: effects of local and global language context.

    PubMed

    Swaab, Tamara Y; Boudewyn, Megan A; Long, Debra L; Luck, Steve J; Kring, Ann M; Ragland, J Daniel; Ranganath, Charan; Lesh, Tyler; Niendam, Tara; Solomon, Marjorie; Mangun, George R; Carter, Cameron S

    2013-09-25

    Individuals with schizophrenia are impaired in a broad range of cognitive functions, including impairments in the controlled maintenance of context-relevant information. In this study, we used ERPs in human subjects to examine whether impairments in the controlled maintenance of spoken discourse context in schizophrenia lead to overreliance on local associations among the meanings of individual words. Healthy controls (n = 22) and patients (n = 22) listened to short stories in which we manipulated global discourse congruence and local priming. The target word in the last sentence of each story was globally congruent or incongruent and locally associated or unassociated. ERP local association effects did not significantly differ between control participants and schizophrenia patients. However, in contrast to controls, patients only showed effects of discourse congruence when targets were primed by a word in the local context. When patients had to use discourse context in the absence of local priming, they showed impaired brain responses to the target. Our findings indicate that schizophrenia patients are impaired during discourse comprehension when demands on controlled maintenance of context are high. We further found that ERP measures of increased reliance on local priming predicted reduced social functioning, suggesting that alterations in the neural mechanisms underlying discourse comprehension have functional consequences in the illness. PMID:24068824

  14. Neural Organization of Spoken Language Revealed by Lesion-Symptom Mapping

    PubMed Central

    Mirman, Daniel; Chen, Qi; Zhang, Yongsheng; Wang, Ze; Faseyitan, Olufunsho K.; Coslett, H. Branch; Schwartz, Myrna F.

    2015-01-01

    Studies of patients with acquired cognitive deficits following brain damage and studies using contemporary neuroimaging techniques form two distinct streams of research on the neural basis of cognition. In this study, we combine high-quality structural neuroimaging analysis techniques and extensive behavioral assessment of patients with persistent acquired language deficits to study the neural basis of language. Our results reveal two major divisions within the language system – meaning vs. form and recognition vs. production – and their instantiation in the brain. Phonological form deficits are associated with lesions in peri-Sylvian regions, whereas semantic production and recognition deficits are associated with damage to the left anterior temporal lobe and white matter connectivity with frontal cortex, respectively. These findings provide a novel synthesis of traditional and contemporary views of the cognitive and neural architecture of language processing, emphasizing dual-routes for speech processing and convergence of white matter tracts for semantic control and/or integration. PMID:25879574

  15. Neural organization of spoken language revealed by lesion-symptom mapping.

    PubMed

    Mirman, Daniel; Chen, Qi; Zhang, Yongsheng; Wang, Ze; Faseyitan, Olufunsho K; Coslett, H Branch; Schwartz, Myrna F

    2015-01-01

    Studies of patients with acquired cognitive deficits following brain damage and studies using contemporary neuroimaging techniques form two distinct streams of research on the neural basis of cognition. In this study, we combine high-quality structural neuroimaging analysis techniques and extensive behavioural assessment of patients with persistent acquired language deficits to study the neural basis of language. Our results reveal two major divisions within the language system-meaning versus form and recognition versus production-and their instantiation in the brain. Phonological form deficits are associated with lesions in peri-Sylvian regions, whereas semantic production and recognition deficits are associated with damage to the left anterior temporal lobe and white matter connectivity with frontal cortex, respectively. These findings provide a novel synthesis of traditional and contemporary views of the cognitive and neural architecture of language processing, emphasizing dual routes for speech processing and convergence of white matter tracts for semantic control and/or integration. PMID:25879574

  16. Communication through Foreign Languages: An Economic Force in Chinese Enterprises.

    ERIC Educational Resources Information Center

    Hildebrandt, Herbert W.; Liu, Jinyun

    1991-01-01

    Second-language use by Chinese business managers illustrates that second-language competence is driven partly by economic and political forces. Although Russian language knowledge is typical of the older managers, English and Japanese are favored by younger managers, reflecting the wane of Russian political influence and the growing importance of…

  17. Oracle Bones and Mandarin Tones: Demystifying the Chinese Language.

    ERIC Educational Resources Information Center

    Michigan Univ., Ann Arbor. Project on East Asian Studies in Education.

    This publication will provide secondary level students with a basic understanding of the development and structure of Chinese (guo yu) language characters. The authors believe that demystifying the language helps break many cultural barriers. The written language is a good place to begin because its pictographic nature is appealing and inspires…

  18. Educating Teachers of "Chinese as a Local/Global Language": Teaching "Chinese with Australian Characteristics"

    ERIC Educational Resources Information Center

    Singh, Michael; Han, Jinghe

    2014-01-01

    How can the education of teacher-researchers from China be framed so that they might make Chinese learnable for primary and secondary school learners for whom English is the everyday language of instruction and communication? The concept "making Chinese learnable" and the characteristics of the language learners are explained in the…

  19. Auditory Cortical Activity During Cochlear Implant-Mediated Perception of Spoken Language, Melody, and Rhythm

    PubMed Central

    Molloy, Anne T.; Jiradejvong, Patpong; Braun, Allen R.

    2009-01-01

    Despite the significant advances in language perception for cochlear implant (CI) recipients, music perception continues to be a major challenge for implant-mediated listening. Our understanding of the neural mechanisms that underlie successful implant listening remains limited. To our knowledge, this study represents the first neuroimaging investigation of music perception in CI users, with the hypothesis that CI subjects would demonstrate greater auditory cortical activation than normal hearing controls. H₂¹⁵O positron emission tomography (PET) was used here to assess auditory cortical activation patterns in ten postlingually deafened CI patients and ten normal hearing control subjects. Subjects were presented with language, melody, and rhythm tasks during scanning. Our results show significant auditory cortical activation in implant subjects in comparison to control subjects for language, melody, and rhythm. The greatest activity in CI users compared to controls was seen for language tasks, which is thought to reflect both implant and neural specializations for language processing. For musical stimuli, PET scanning revealed significantly greater activation during rhythm perception in CI subjects (compared to control subjects), and the least activation during melody perception, which was the most difficult task for CI users. These results may suggest a possible relationship between auditory performance and degree of auditory cortical activation in implant recipients that deserves further study. PMID:19662456

  20. Home Literacy Environment and Its Influence on Singaporean Children's Chinese Oral and Written Language Abilities

    ERIC Educational Resources Information Center

    Li, Li; Tan, Chee Lay

    2016-01-01

    In a bilingual environment such as the Singaporean Chinese community, the challenge of maintaining the Chinese language and sustaining Chinese culture lies in promoting the daily use of Chinese in oral and written forms among children. Ample evidence has shown the effect of the home language and literacy environment (HLE) on children's language and…

  1. Language-universal sensory deficits in developmental dyslexia: English, Spanish, and Chinese.

    PubMed

    Goswami, Usha; Wang, H-L Sharon; Cruz, Alicia; Fosker, Tim; Mead, Natasha; Huss, Martina

    2011-02-01

    Studies in sensory neuroscience reveal the critical importance of accurate sensory perception for cognitive development. There is considerable debate concerning the possible sensory correlates of phonological processing, the primary cognitive risk factor for developmental dyslexia. Across languages, children with dyslexia have a specific difficulty with the neural representation of the phonological structure of speech. The identification of a robust sensory marker of phonological difficulties would enable early identification of risk for developmental dyslexia and early targeted intervention. Here, we explore whether phonological processing difficulties are associated with difficulties in processing acoustic cues to speech rhythm. Speech rhythm is used across languages by infants to segment the speech stream into words and syllables. Early difficulties in perceiving auditory sensory cues to speech rhythm and prosody could lead developmentally to impairments in phonology. We compared matched samples of children with and without dyslexia, learning three very different spoken and written languages, English, Spanish, and Chinese. The key sensory cue measured was the rate of onset of the amplitude envelope (rise time), known to be critical for the rhythmic timing of speech. Despite phonological and orthographic differences, for each language, rise time sensitivity was a significant predictor of phonological awareness, and rise time was the only consistent predictor of reading acquisition. The data support a language-universal theory of the neural basis of developmental dyslexia based on rhythmic perception and syllable segmentation. They also suggest that novel remediation strategies based on rhythm and music may offer benefits for phonological and linguistic development. PMID:20146613
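
    For readers who want to experiment with the rise-time measure, the sketch below estimates a 10%-to-90% envelope rise time with SciPy. The envelope extraction (Hilbert transform plus low-pass smoothing) and the threshold definition are common operationalizations assumed here, not the study's exact stimulus-analysis procedure.

```python
# Estimate amplitude-envelope rise time for an audio signal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def rise_time(y, sr):
    """Seconds for the amplitude envelope to climb from 10% to 90% of its peak."""
    env = np.abs(hilbert(y))              # amplitude envelope
    b, a = butter(2, 30 / (sr / 2))       # smooth: keep modulations below ~30 Hz
    env = filtfilt(b, a, env)
    t10 = int(np.argmax(env > 0.1 * env.max()))
    t90 = int(np.argmax(env > 0.9 * env.max()))
    return (t90 - t10) / sr

# 500 Hz tone whose amplitude ramps up linearly over 150 ms.
sr = 16000
t = np.linspace(0, 0.5, sr // 2, endpoint=False)
tone = np.sin(2 * np.pi * 500 * t) * np.minimum(t / 0.15, 1.0)
print(f"estimated rise time: {rise_time(tone, sr) * 1000:.0f} ms")
```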

  2. Unlocking Australia's Language Potential: Profiles of 9 Key Languages in Australia. Volume 2, Chinese.

    ERIC Educational Resources Information Center

    Smith, Doug; And Others

    This work is one in a series that focuses on nine languages representing the bulk of the second language learning effort in Australian education (Arabic, Modern Standard Chinese, French, German, Modern Greek, Indonesian/Malay, Italian, Japanese, and Spanish). These languages were categorized as the Languages of Wider Teaching. This particular…

  3. The Interplay between Spoken Language and Informal Definitions of Statistical Concepts

    ERIC Educational Resources Information Center

    Lavy, Ilana; Mashiach-Eizenberg, Michal

    2009-01-01

    Various terms are used to describe mathematical concepts, in general, and statistical concepts, in particular. Regarding statistical concepts in the Hebrew language, some of these terms have the same meaning both in their everyday use and in mathematics, such as Mode; some of them have a different meaning, such as Expected value and Life…

  4. Parallel language activation and cognitive control during spoken word recognition in bilinguals

    PubMed Central

    Blumenfeld, Henrike K.; Marian, Viorica

    2013-01-01

    Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300–500ms after word onset was associated with smaller Stroop effects; between 633–767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842

  5. Parallel language activation and cognitive control during spoken word recognition in bilinguals.

    PubMed

    Blumenfeld, Henrike K; Marian, Viorica

    2013-01-01

    Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals' parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300-500ms after word onset was associated with smaller Stroop effects; between 633-767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842

  6. Assessing Spoken Language Competence in Children with Selective Mutism: Using Parents as Test Presenters

    ERIC Educational Resources Information Center

    Klein, Evelyn R.; Armstrong, Sharon Lee; Shipon-Blum, Elisa

    2013-01-01

    Children with selective mutism (SM) display a failure to speak in select situations despite speaking when comfortable. The purpose of this study was to obtain valid assessments of receptive and expressive language in 33 children (ages 5 to 12) with SM. Because some children with SM will speak to parents but not a professional, another purpose was…

  7. Vulgaridades del Habla en Grafia (Colloquialisms for the Spoken and the Written Language)

    ERIC Educational Resources Information Center

    Nadal, Rogelio

    1977-01-01

    A warning about the alarming situation in which written Spanish finds itself, not only in the daily press but even in more literary publications. More and more popular expressions and corruptions are finding their way into the written language. Attention to this situation is recommended. (Text is in Spanish.) (AMH)

  8. Changes to English as an Additional Language Writers' Research Articles: From Spoken to Written Register

    ERIC Educational Resources Information Center

    Koyalan, Aylin; Mumford, Simon

    2011-01-01

    The process of writing journal articles is increasingly being seen as a collaborative process, especially where the authors are English as an Additional Language (EAL) academics. This study examines the changes made in terms of register to EAL writers' journal articles by a native-speaker writing centre advisor at a private university in Turkey.…

  9. Automatic detection of Parkinson's disease in running speech spoken in three different languages.

    PubMed

    Orozco-Arroyave, J R; Hönig, F; Arias-Londoño, J D; Vargas-Bonilla, J F; Daqrouq, K; Skodda, S; Rusz, J; Nöth, E

    2016-01-01

    The aim of this study is the analysis of continuous speech signals of people with Parkinson's disease (PD) considering recordings in different languages (Spanish, German, and Czech). A method for the characterization of the speech signals, based on the automatic segmentation of utterances into voiced and unvoiced frames, is addressed here. The energy content of the unvoiced sounds is modeled using 12 Mel-frequency cepstral coefficients and 25 bands scaled according to the Bark scale. Four speech tasks comprising isolated words, rapid repetition of the syllables /pa/-/ta/-/ka/, sentences, and read texts are evaluated. The method proves to be more accurate than classical approaches in the automatic classification of speech of people with PD and healthy controls. The accuracies range from 85% to 99% depending on the language and the speech task. Cross-language experiments are also performed confirming the robustness and generalization capability of the method, with accuracies ranging from 60% to 99%. This work comprises a step forward for the development of computer aided tools for the automatic assessment of dysarthric speech signals in multiple languages. PMID:26827042
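
    A pipeline in this spirit can be sketched with standard audio tooling. In the sketch below, unvoiced frames are located with a crude zero-crossing/energy heuristic standing in for the paper's segmentation method, 12 MFCCs are averaged over those frames, and one feature vector per recording would feed a classifier; the file lists and thresholds are hypothetical.

```python
# Per-recording features from unvoiced frames, in the spirit of the method
# described above (not a reimplementation of it).
import numpy as np
import librosa
from sklearn.svm import SVC

def unvoiced_mfcc_features(path, n_mfcc=12, frame=1024, hop=512):
    y, sr = librosa.load(path, sr=None)
    zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame, hop_length=hop)[0]
    rms = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)[0]
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, n_fft=frame, hop_length=hop)
    # Heuristic: unvoiced frames have high zero-crossing rate but are not silent.
    unvoiced = (zcr > np.median(zcr)) & (rms > 0.01 * rms.max())
    return mfcc[:, unvoiced].mean(axis=1)  # one vector per recording

# Hypothetical usage with lists of PD and healthy-control recordings:
# X = np.array([unvoiced_mfcc_features(p) for p in paths_pd + paths_hc])
# y = np.array([1] * len(paths_pd) + [0] * len(paths_hc))
# SVC(kernel="rbf").fit(X, y)
```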

  10. Neural Processing of Spoken Words in Specific Language Impairment and Dyslexia

    ERIC Educational Resources Information Center

    Helenius, Paivi; Parviainen, Tiina; Paetau, Ritva; Salmelin, Riitta

    2009-01-01

    Young adults with a history of specific language impairment (SLI) differ from reading-impaired (dyslexic) individuals in terms of limited vocabulary and poor verbal short-term memory. Phonological short-term memory has been shown to play a significant role in learning new words. We investigated the neural signatures of auditory word recognition…

  11. The Languages of the United States: What Is Spoken and What It Means.

    ERIC Educational Resources Information Center

    Chiswick, Barry R.; Miller, Paul W.

    1996-01-01

    Provides a statistical portrait of residents of the United States who speak a language other than English, including the degree of their fluency in English. The analysis is based on the microdata files from the 1990 Census of Population. The data reveal that a continuation of current migration patterns will result in both linguistic concentration…

  12. Associations between Chinese Language Classroom Environments and Students' Motivation to Learn the Language

    ERIC Educational Resources Information Center

    Chua, Siew Lian; Wong, Angela F. L.; Chen, Der-Thanq

    2009-01-01

    Associations between the nature of Chinese Language Classroom Environments and Singapore secondary school students' motivation to learn the Chinese Language were investigated. A sample of 1,460 secondary three (grade 9) students from 50 express stream (above average academic ability) classes in Singapore government secondary schools was involved…

  13. Oral narrative context effects on poor readers' spoken language performance: story retelling, story generation, and personal narratives.

    PubMed

    Westerveld, Marleen F; Gillon, Gail T

    2010-04-01

    This investigation explored the effects of oral narrative elicitation context on children's spoken language performance. Oral narratives were produced by a group of 11 children with reading disability (aged between 7;11 and 9;3) and an age-matched control group of 11 children with typical reading skills in three different contexts: story retelling, story generation, and personal narratives. In the story retelling condition, the children listened to a story on tape while looking at the pictures in a book, before being asked to retell the story without the pictures. In the story generation context, the children were shown a picture containing a scene and were asked to make up their own story. Personal narratives were elicited with the help of photos and short narrative prompts. The transcripts were analysed at microstructure level on measures of verbal productivity, semantic diversity, and morphosyntax. Consistent with previous research, the results revealed no significant interactions between group and context, indicating that the two groups of children responded to the type of elicitation context in a similar way. There was a significant group effect, however, with the typical readers showing better performance overall on measures of morphosyntax and semantic diversity. There was also a significant effect of elicitation context, with both groups of children producing the longest, linguistically most dense language samples in the story retelling context. Finally, the most significant differences in group performance were observed in the story retelling condition, with the typical readers outperforming the poor readers on measures of verbal productivity, number of different words, and percent complex sentences. The results from this study confirm that oral narrative samples can distinguish between good and poor readers and that the story retelling condition may be a particularly useful context for identifying strengths and weaknesses in oral narrative performance.
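
    The microstructure measures named here are straightforward to approximate from raw transcripts. The sketch below computes crude versions of verbal productivity (total words), semantic diversity (number of different words), and percent complex sentences from plain text; real analyses use coded transcripts and a proper definition of clausal complexity, so treat this only as an illustration.

```python
# Crude microstructure measures from a plain-text narrative transcript.
import re

SUBORDINATORS = {"because", "when", "that", "which", "who", "if", "while", "so"}

def microstructure(transcript):
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    words = re.findall(r"[a-z']+", transcript.lower())
    complex_count = sum(
        any(w in SUBORDINATORS for w in re.findall(r"[a-z']+", s.lower()))
        for s in sentences
    )
    return {
        "total_words": len(words),                    # verbal productivity
        "different_words": len(set(words)),           # semantic diversity
        "pct_complex": 100 * complex_count / max(len(sentences), 1),
    }

print(microstructure("The dog ran away because the gate was open. It came back."))
```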

  14. Predictors of Early Reading Skill in 5-Year-Old Children With Hearing Loss Who Use Spoken Language

    PubMed Central

    Ching, Teresa Y.C.; Crowe, Kathryn; Day, Julia; Seeto, Mark

    2013-01-01

    This research investigated the concurrent association between early reading skills and phonological awareness (PA), print knowledge, language, cognitive, and demographic variables in 101 5-year-old children with prelingual hearing losses ranging from mild to profound who communicated primarily using spoken language. All participants were fitted with hearing aids (n = 71) or cochlear implants (n = 30). They completed standardized assessments of PA, receptive vocabulary, letter knowledge, word and non-word reading, passage comprehension, math reasoning, and nonverbal cognitive ability. Multiple regressions revealed that PA (assessed using judgments of similarity based on words’ initial or final sounds) made a significant, independent contribution to children’s early reading ability (for both letters and words/non-words) after controlling for variation in receptive vocabulary, nonverbal cognitive ability, and a range of demographic variables (including gender, degree of hearing loss, communication mode, type of sensory device, age at fitting of sensory devices, and level of maternal education). Importantly, the relationship between PA and reading was specific to reading and did not generalize to another academic ability, math reasoning. Additional multiple regressions showed that letter knowledge (names or sounds) was superior in children whose mothers had undertaken post-secondary education, and that better receptive vocabulary was associated with less severe hearing loss, use of a cochlear implant, and earlier age at implant switch-on. Earlier fitting of hearing aids or cochlear implants was not, however, significantly associated with better PA or reading outcomes in this cohort of children, most of whom were fitted with sensory devices before 3 years of age. PMID:24563553
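
    The regression logic here (does PA explain unique variance in reading once vocabulary, cognition, and demographics are controlled?) is easy to sketch. The example below uses statsmodels on simulated data; the column names and effect sizes are invented for illustration, and only a subset of the study's control variables is included.

```python
# Compare a controls-only model with one that adds phonological awareness;
# the gain in R-squared is PA's unique contribution.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 101
df = pd.DataFrame({
    "vocabulary": rng.normal(100, 15, n),
    "nonverbal_iq": rng.normal(100, 15, n),
    "phon_awareness": rng.normal(0, 1, n),
})
df["word_reading"] = (0.3 * df["vocabulary"] + 0.2 * df["nonverbal_iq"]
                      + 5.0 * df["phon_awareness"] + rng.normal(0, 10, n))

base = smf.ols("word_reading ~ vocabulary + nonverbal_iq", data=df).fit()
full = smf.ols("word_reading ~ vocabulary + nonverbal_iq + phon_awareness",
               data=df).fit()
print(f"unique R^2 contributed by PA: {full.rsquared - base.rsquared:.3f}")
```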

  15. Grammatical number processing and anticipatory eye movements are not tightly coordinated in English spoken language comprehension

    PubMed Central

    Riordan, Brian; Dye, Melody; Jones, Michael N.

    2015-01-01

    Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information – e.g., grammatical gender and number marking – can produce anticipatory eye movements to referents in the visual scene. We investigated how type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants’ eye movements were recorded as they listened to simple English declarative (There are the lions.) and interrogative (Where are the lions?) sentences. In Experiment 1, no differences were observed in speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing. PMID:25999900

  16. Distinguishing Features in Scoring L2 Chinese Speaking Performance: How Do They Work?

    ERIC Educational Resources Information Center

    Jin, Tan; Mak, Barley

    2013-01-01

    For Chinese as a second language (L2 Chinese), there has been little research into "distinguishing features" (Fulcher, 1996; Iwashita et al., 2008) used in scoring L2 Chinese speaking performance. The study reported here investigates the relationship between the distinguishing features of L2 Chinese spoken performances and the scores awarded by…

  17. 75 FR 5767 - All Terrain Vehicle Chinese Language Webinar; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-04

    From the Federal Register Online via the Government Publishing Office. CONSUMER PRODUCT SAFETY COMMISSION. All Terrain Vehicle Chinese Language Webinar; Meeting. AGENCY: Consumer Product Safety Commission. ACTION: Notice. The Consumer Product Safety Commission (CPSC) is announcing the following meeting:...

  18. Measuring social desirability across language and sex: A comparison of Marlowe-Crowne Social Desirability Scale factor structures in English and Mandarin Chinese in Malaysia.

    PubMed

    Kurz, A Solomon; Drescher, Christopher F; Chin, Eu Gene; Johnson, Laura R

    2016-06-01

    Malaysia is a Southeast Asian country in which multiple languages are prominently spoken, including English and Mandarin Chinese. As psychological science continues to develop within Malaysia, there is a need for psychometrically sound instruments that measure psychological phenomena in multiple languages. For example, assessment tools for measuring social desirability could be a useful addition in psychological assessments and research studies in a Malaysian context. This study examined the psychometric performance of the English and Mandarin Chinese versions of the Marlowe-Crowne Social Desirability Scale when used in Malaysia. Two hundred and eighty-three students (64% female; 83% Chinese, 9% Indian) from two college campuses completed the Marlowe-Crowne Social Desirability Scale in their language of choice (i.e., English or Mandarin Chinese). Proposed factor structures were compared with confirmatory factor analysis, and multiple indicators-multiple causes models were used to examine measurement invariance across language and sex. Factor analyses supported a two-factor structure (i.e., Attribution and Denial) for the measure. Invariance tests revealed the scale was invariant by sex, indicating that social desirability can be interpreted similarly across sex. The scale was partially invariant by language version, with some non-invariance observed within the Denial factor. Non-invariance may be related to differences in the English and Mandarin Chinese languages, as well as cultural differences. Directions for further research include examining the measurement of social desirability in other contexts where both English and Mandarin Chinese are spoken (i.e., China) and further examining the causes of non-invariance on specific items. PMID:27168227

  19. The Application of Contrastive Analysis to Chinese Language Teaching.

    ERIC Educational Resources Information Center

    Huang, Li-yi

    This document presents a contrastive study of English and Chinese noun phrases, verb phrases, and word order, and discusses common mistakes made by English speakers learning Chinese. Mistakes often made by English speakers due to differences between the two languages are divided into three categories: the first is mistakes in word order, where the English…

  20. Things happen: Individuals with high obsessive-compulsive tendencies omit agency in their spoken language.

    PubMed

    Oren, Ela; Friedmann, Naama; Dar, Reuven

    2016-05-01

    The study examined the prediction that obsessive-compulsive tendencies are related to an attenuated sense of agency (SoA). As most explicit agency judgments are likely to reflect also motivation for and expectation of control, we examined agency in sentence production. Reduced agency can be expressed linguistically by omitting the agent or by using grammatical framings that detach the event from the entity that caused it. We examined the use of agentic language of participants with high vs. low scores on a measure of obsessive-compulsive (OC) symptoms, using structured linguistic tasks in which sentences are elicited in a conversation-like setting. As predicted, high OC individuals produced significantly more non-agentic sentences than low OC individuals, using various linguistic strategies. The results suggest that OC tendencies are related to attenuated SoA. We discuss the implications of these findings for explicating the SoA in OCD and the potential contribution of language analysis for understanding psychopathology. PMID:27003263

  1. The Study of Chinese in a Provincial French Secondary School

    ERIC Educational Resources Information Center

    Dutrait, Noel

    1978-01-01

    A discussion of Chinese instruction in a Bordeaux lycée. Specific questions addressed are: enrollment; reasons for studying Chinese; the study of the written and spoken language; the need for an appropriate textbook; teaching culture; and the need to organize Chinese as a regular course in the curriculum. (AMH)

  2. Dissimilation in the Second Language Acquisition of Mandarin Chinese Tones

    ERIC Educational Resources Information Center

    Zhang, Hang

    2016-01-01

    This article extends Optimality Theoretic studies to the research on second language tone phonology. Specifically, this work analyses the acquisition of identical tone sequences in Mandarin Chinese by adult speakers of three non-tonal languages: English, Japanese and Korean. This study finds that the learners prefer not to use identical lexical…

  3. Information Technology and Chinese Language Instruction: A Search for Standards.

    ERIC Educational Resources Information Center

    Alber, Charles J.

    1989-01-01

    Future technological advancements in computers will soon provide high-definition video and graphics ability to simulate language learning situations. Trends in technological options, networks, word processing, and database management are explored, and their usefulness to Chinese language study is discussed. (59 references) (CB)

  4. Implementation of Task-Based Language Teaching in Chinese as a Foreign Language: Benefits and Challenges

    ERIC Educational Resources Information Center

    Bao, Rui; Du, Xiangyun

    2015-01-01

    Task-based language teaching (TBLT) has been drawing increased attention from language teachers and researchers in the past decade. This paper focuses on the effects of TBLT on beginner learners of Chinese as a foreign language (CFL) in Denmark. Participatory observation and semi-structured interviews were carried out with 18 participants from two…

  5. Ethnic Contestation and Language Policy in a Plural Society: The Chinese Language Movement in Malaysia, 1952-1967

    ERIC Educational Resources Information Center

    Yao Sua, Tan; Hooi See, Teoh

    2014-01-01

    The Chinese language movement was launched by the Chinese educationists to demand the recognition of Chinese as an official language to legitimise the status of Chinese education in the national education system in Malaysia. It began in 1952 as a response to the British attempt to establish national primary schools teaching in English and Malay to…

  6. The relationship between spoken language and speech and nonspeech processing in children with autism: a magnetic event-related field study.

    PubMed

    Yau, Shu Hui; Brock, Jon; McArthur, Genevieve

    2016-09-01

    It has been proposed that language impairments in children with Autism Spectrum Disorders (ASD) stem from atypical neural processing of speech and/or nonspeech sounds. However, the strength of this proposal is compromised by the unreliable outcomes of previous studies of speech and nonspeech processing in ASD. The aim of this study was to determine whether there was an association between poor spoken language and atypical event-related field (ERF) responses to speech and nonspeech sounds in children with ASD (n = 14) and controls (n = 18). Data from this developmental population (ages 6-14) were analysed using a novel combination of methods to maximize the reliability of our findings while taking into consideration the heterogeneity of the ASD population. The results showed that poor spoken language scores were associated with atypical left hemisphere brain responses (200 to 400 ms) to both speech and nonspeech in the ASD group. These data support the idea that some children with ASD may have an immature auditory cortex that affects their ability to process both speech and nonspeech sounds. Their poor speech processing may impair their ability to process the speech of other people, and hence reduce their ability to learn the phonology, syntax, and semantics of their native language. PMID:27146167

  7. Variability in Chinese as a Foreign Language Learners' Development of the Chinese Numeral Classifier System

    ERIC Educational Resources Information Center

    Zhang, Jie; Lu, Xiaofei

    2013-01-01

    This study examined variability in Chinese as a Foreign Language (CFL) learners' development of the Chinese numeral classifier system from a dynamic systems approach. Our data consisted of a longitudinal corpus of 657 essays written by CFL learners at lower and higher intermediate levels and a corpus of 100 essays written by native speakers (NSs)…

  8. Spoken Dutch.

    ERIC Educational Resources Information Center

    Bloomfield, Leonard

    This course in spoken Dutch is intended for use in introductory conversational classes. The book is divided into five major parts, each containing five learning units and one unit devoted to review. Each unit contains sections including (1) basic sentences, (2) word study and review of basic sentences, (3) listening comprehension, and (4)…

  9. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script

    PubMed Central

    Zhang, Qingfang; Wang, Cheng

    2014-01-01

    The effects of word frequency (WF) and syllable frequency (SF) are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French, or English. The present study investigated the effects of WF and SF, and their interaction, in Chinese written and spoken production. Significant facilitatory WF and SF effects were observed in spoken as well as in written production. The SF effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the SF effect over repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, whereas it was significant only in the second repetition in written production. Given the fragility of the SF effect in writing, we suggest that the phonological influence on handwritten production is not mandatory and universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between WF and SF showed that the SF effect is independent of the WF effect in both spoken and written output modalities. The implications of these results for written production models are discussed. PMID:24600420

  10. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    PubMed

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects of orthographic syllable neighborhood size and sound-to-spelling consistency on the P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that the P200 was smaller for words with large orthographic syllable neighborhoods than for words with small ones. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had an inconsistent sound-to-spelling mapping. The results support the assumption that orthographic information is used early, during the prelexical stages of spoken word recognition. PMID:26669620

  11. Conceptions of Language Teaching of Chinese-Born Language Teachers

    ERIC Educational Resources Information Center

    Packevicz, Michael Joseph, Jr.

    2012-01-01

    In the educational world, there is a growing amount of scholarship directed toward China. Much Western writing has focused on the supposedly deficient aspects of Chinese education: rote memorization, large class size, teacher-centered methodology. While there are those who challenge the deficiency view of Chinese education--Watkins and Biggs, Jin…

  12. How Vocabulary Size in Two Languages Relates to Efficiency in Spoken Word Recognition by Young Spanish-English Bilinguals

    ERIC Educational Resources Information Center

    Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda

    2010-01-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n = 26; age 2;6). Between-language…

  13. The Influence of Chinese Character Handwriting Diagnosis and Remedial Instruction System on Learners of Chinese as a Foreign Language

    ERIC Educational Resources Information Center

    Hsiao, Hsien-Sheng; Chang, Cheng-Sian; Chen, Chiao-Jia; Wu, Chia-Hou; Lin, Chien-Yu

    2015-01-01

    This study designed and developed a Chinese character handwriting diagnosis and remedial instruction (CHDRI) system to improve Chinese as a foreign language (CFL) learners' ability to write Chinese characters. The CFL learners were given two tests based on the CHDRI system. One test focused on Chinese character handwriting to diagnose the CFL…

  14. Japanese Speakers' Second Language Chinese Wh-Questions: A Lexical Morphological Feature Deficit Account

    ERIC Educational Resources Information Center

    Yuan, Boping

    2007-01-01

    In this article, an empirical study of how Chinese wh-questions are mentally represented in Japanese speakers' grammars of Chinese as a second language (L2) is reported. Both Chinese and Japanese are generally considered "wh-in-situ" languages in which a wh-word is allowed to remain in its base-generated position, and both languages use question…

  15. Keys to Chinese Character Writing. Step-by-Step Directions to Writing Characters Quickly and Easily.

    ERIC Educational Resources Information Center

    Ma, Jing Heng Sheng

    The most interesting and challenging aspect of studying Chinese is writing Chinese characters. Unfortunately, the learning of Chinese characters receives only marginal attention in a typical classroom. Given that vocabularies in textbooks are based on spoken language rather than the principles of character formation, and also given the pressures…

  16. Children’s Recall of Words Spoken in Their First and Second Language: Effects of Signal-to-Noise Ratio and Reverberation Time

    PubMed Central

    Hurtig, Anders; Keus van de Poll, Marijke; Pekkola, Elina P.; Hygge, Staffan; Ljung, Robert; Sörqvist, Patrik

    2016-01-01

    Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few studies have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants’ first (L1) and second language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 s) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 than in L1, and recall was better at the high SNR (+12 dBA) than at the low SNR (+3 dBA). Reverberation time interacted with SNR such that the shorter reverberation time improved recall at +12 dBA but impaired it at +3 dBA. The effects of the physical sound variables (SNR and reverberation time) did not interact with language. PMID:26834665
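
    The 2 x 2 crossed design described above can be analysed with a standard factorial model. A minimal sketch follows; the file and column names are hypothetical, and the actual study would have used a repeated-measures analysis, since every child heard all four conditions:

      import pandas as pd
      import statsmodels.formula.api as smf
      from statsmodels.stats.anova import anova_lm

      # Hypothetical long-format data: one row per child per condition.
      df = pd.read_csv("recall_scores.csv")  # recall, snr_db, reverb_s, language

      # '*' expands to both main effects plus the SNR x reverberation
      # interaction; C() treats the numeric levels as categorical factors.
      model = smf.ols("recall ~ C(snr_db) * C(reverb_s)", data=df).fit()
      print(anova_lm(model, typ=2))

      # Cell means expose the crossover: short reverberation should help
      # at +12 dBA but hurt at +3 dBA.
      print(df.groupby(["snr_db", "reverb_s"])["recall"].mean().unstack())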

  17. Text Memorisation in Chinese Foreign Language Education

    ERIC Educational Resources Information Center

    Yu, Xia

    2012-01-01

    In China, a widespread practice in foreign language learning is reading, reciting, and memorising texts. This book investigates the practice against the background of Confucian-heritage learning and Western attitudes towards memorisation, from audio-lingual approaches to language teaching to the later, largely negative attitudes. The author…

  18. Dyslexia in Chinese Language: An Overview of Research and Practice

    ERIC Educational Resources Information Center

    Chung, Kevin K. H.; Ho, Connie S. H.

    2010-01-01

    Dyslexia appears to be the most prevalent disability of students with special educational needs in many mainstream classes, affecting around 9.7% of the school population in Hong Kong. The education of these students is therefore of great concern to the community. In the present paper research into dyslexia in the Chinese language is briefly…

  19. List of Chinese Dictionaries in All Languages. External Research Paper.

    ERIC Educational Resources Information Center

    Department of State, Washington, DC.

    A compilation from lists of dictionaries used by several U.S. Government organizations, this document includes the titles of, and information concerning, dictionaries covering over 25 topics in the scientific and technical fields and numerous economic, political, and sociological topics. Many Chinese-foreign language dictionaries are…

  1. Designing ICT Training Material for Chinese Language Arts Teachers.

    ERIC Educational Resources Information Center

    Lin, Janet Mei-Chuen; Wu, Cheng-Chih; Chen, Hsiu-Yen

    The purpose of this research is to tailor the design of information and communications technology (ICT) training material to the needs of Chinese language arts teachers such that the training they receive will be conducive to effective integration of ICT into instruction. Eighteen experienced teachers participated in a Delphi-like survey that…

  2. Processing Relative Clauses in Chinese as a Second Language

    ERIC Educational Resources Information Center

    Xu, Yi

    2014-01-01

    This project investigates second language (L2) learners' processing of four types of Chinese relative clauses crossing extraction types and demonstrative-classifier (DCl) positions. Using a word order judgment task with a whole-sentence reading technique, the study also discusses how psycholinguistic theories bear explanatory power in L2…

  3. Chinese Language Study Abroad in the Summer, 1990. Final Report.

    ERIC Educational Resources Information Center

    Thompson, Richard T.

    After an analysis of the changing numbers of Americans studying Chinese abroad and of Sino-American academic exchanges after the Tiananmen events of 1989, this paper reports on visits to summer language programs. Enrollments were down by 13 percent between the summers of 1988 and 1989, but by 50 percent between 1989 and 1990. The following…

  4. Engaging a "Truly Foreign" Language and Culture: China through Chinese Film

    ERIC Educational Resources Information Center

    Ning, Cynthia

    2009-01-01

    In this article, the author shares how she uses Chinese film in her Chinese language and culture classes. She demonstrates how Chinese films can help students "navigate the uncharted universe of Chinese culture" with reference to several contemporary Chinese films. She describes how intensive viewing of films can develop a deeper and more nuanced…

  5. Invisible and Visible Language Planning: Ideological Factors in the Family Language Policy of Chinese Immigrant Families in Quebec

    ERIC Educational Resources Information Center

    Curdt-Christiansen, Xiao Lan

    2009-01-01

    This ethnographic inquiry examines how family language policies are planned and developed in ten Chinese immigrant families in Quebec, Canada, with regard to their children's language and literacy education in three languages: Chinese, English, and French. The focus is on how multilingualism is perceived and valued, and how these three languages…

  6. Self-ratings of Spoken Language Dominance: A Multi-Lingual Naming Test (MINT) and Preliminary Norms for Young and Aging Spanish-English Bilinguals*

    PubMed Central

    Gollan, Tamar H.; Weissberger, Gali H.; Runnqvist, Elin; Montoya, Rosa I.; Cera, Cynthia M.

    2014-01-01

    This study investigated the correspondence between different measures of bilingual language proficiency, contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and named pictures in a Multilingual Naming Test (MINT; in Experiment 1, also the Boston Naming Test, BNT). Self-ratings, the proficiency interview, and the MINT did not differ significantly in classifying bilinguals into language-dominance groups, but the naming tests (especially the BNT) classified bilinguals as more English-dominant than the other measures did. Strong correlations were observed between measures of proficiency in each language and of language dominance, but not of degree of balanced bilingualism (index scores). Depending on the measure, up to 60% of bilinguals scored best in their self-reported non-dominant language. The BNT distorted bilingual assessment by underestimating ability in Spanish. These results illustrate what self-ratings can and cannot provide, illustrate the pitfalls of testing bilinguals with measures designed for monolinguals, and invite a multi-measure, goal-driven approach to classifying bilinguals into dominance groups. PMID:25364296
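
    Classifying a bilingual into a dominance group from per-language naming accuracy reduces to a difference score plus a threshold. A hypothetical sketch follows; the margin, scores, and labels are assumptions for illustration, not the MINT's scoring rules:

      def dominance_index(english_acc: float, spanish_acc: float) -> float:
          """Signed difference score: the sign gives the direction of
          dominance, the magnitude its degree (balance is near zero)."""
          return english_acc - spanish_acc

      def classify_dominance(english_acc: float, spanish_acc: float,
                             margin: float = 0.05) -> str:
          """Label a bilingual using a simple accuracy margin."""
          diff = dominance_index(english_acc, spanish_acc)
          if diff > margin:
              return "English-dominant"
          if diff < -margin:
              return "Spanish-dominant"
          return "balanced"

      print(classify_dominance(0.82, 0.74))  # -> English-dominant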

  7. Language spoken at home and the association between ethnicity and doctor–patient communication in primary care: analysis of survey data for South Asian and White British patients

    PubMed Central

    Brodie, Kara; Abel, Gary

    2016-01-01

    Objectives To investigate whether language spoken at home mediates the relationship between ethnicity and doctor–patient communication for South Asian and White British patients. Methods We conducted a secondary analysis of patient experience survey data collected from 5870 patients across 25 English general practices. Mixed-effects linear regression estimated the difference in composite general practitioner–patient communication scores between White British and South Asian patients, controlling for practice, patient demographics and patient language. Results There was strong evidence of an association between doctor–patient communication scores and ethnicity. South Asian patients reported scores averaging 3.0 percentage points lower (on a scale of 0–100) than White British patients (95% CI −4.9 to −1.1, p=0.002). This difference reduced to 1.4 points (95% CI −3.1 to 0.4) after accounting for speaking a non-English language at home; respondents who spoke a non-English language at home reported lower scores than English speakers (adjusted difference 3.3 points, 95% CI −6.4 to −0.2). Conclusions South Asian patients rate communication lower than White British patients within the same practices and with similar demographics. Our analysis further shows that this disparity is largely mediated by language. PMID:26940108
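
    The mediation logic above (an ethnicity gap that shrinks once home language enters the model) corresponds to fitting the mixed-effects model with and without the language covariate. A minimal sketch, assuming a hypothetical long-format file and column names:

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical columns: comm_score (0-100), ethnicity, home_language,
      # age, gender, practice (the grouping factor for random intercepts).
      df = pd.read_csv("gp_survey.csv")

      base = smf.mixedlm("comm_score ~ ethnicity + age + gender",
                         data=df, groups=df["practice"]).fit()
      adjusted = smf.mixedlm("comm_score ~ ethnicity + home_language + age + gender",
                             data=df, groups=df["practice"]).fit()

      # If the ethnicity coefficient shrinks toward zero in the adjusted
      # model (e.g., -3.0 -> -1.4 points), language mediates part of the gap.
      print(base.summary())
      print(adjusted.summary())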

  8. Asia Society's Ongoing Chinese Language Initiatives

    ERIC Educational Resources Information Center

    Livaccari, Chris; Wang, Jeff

    2009-01-01

    Asia Society remains committed to promoting the teaching and learning of Chinese in American schools as an integral part of the broader agenda of building students' global competency, the key goal of its Partnership for Global Learning. Under the leadership of Asia Society's new Vice President for Education Tony Jackson and with continuing…

  9. Chinese Language Video Clips. [CD-ROM].

    ERIC Educational Resources Information Center

    Fleming, Stephen; Hipley, David; Ning, Cynthia

    This compact disc includes video clips covering six topics for the learner of Chinese: personal information, commercial transactions, travel and leisure, health and sports, food, and school. Filmed on location in Beijing, these naturalistic video clips consist mainly of unrehearsed interviews with ordinary people. The learner is led through a series…

  10. Spoken name pronunciation evaluation

    NASA Astrophysics Data System (ADS)

    Tepperman, Joseph; Narayanan, Shrikanth

    2004-10-01

    Recognition of spoken names is an important ASR task since many speech applications can be associated with it. However, the task is also among the most difficult ones due to the large number of names, their varying origins, and the multiple valid pronunciations of any given name, largely dependent upon the speaker's mother tongue and familiarity with the name. In order to explore the speaker- and language-dependent pronunciation variability issues present in name pronunciation, a spoken name database was collected from 101 speakers with varying native languages. Each speaker was asked to pronounce 80 polysyllabic names, uniformly chosen from ten language origins. In preliminary experiments, various prosodic features were used to train Gaussian mixture models (GMMs) to identify misplaced syllabic emphasis within the name, at roughly 85% accuracy. Articulatory features (voicing, place, and manner of articulation) derived from MFCCs were also incorporated for that purpose. The combined prosodic and articulatory features were used to automatically grade the quality of name pronunciation. These scores can be used to provide meaningful feedback to foreign language learners. A detailed description of the name database and some preliminary results on the accuracy of detecting misplaced stress patterns will be reported.
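
    The scoring pipeline sketched in the abstract (per-syllable prosodic features feeding Gaussian mixture models that flag misplaced emphasis) can be illustrated compactly. This is a sketch under stated assumptions: the features, model sizes, and stand-in training arrays below are illustrative, not the authors' configuration:

      import numpy as np
      import librosa
      from sklearn.mixture import GaussianMixture

      def syllable_features(y, sr, start, end):
          """Crude prosodic features for one syllable: duration, energy, mean F0."""
          seg = y[int(start * sr):int(end * sr)]
          f0 = librosa.yin(seg, fmin=60, fmax=400, sr=sr)  # pitch track (Hz)
          return np.array([end - start,               # duration in seconds
                           float(np.mean(seg ** 2)),  # mean energy
                           float(np.mean(f0))])       # mean pitch

      # Stand-ins for features from hand-labelled training names; in practice
      # each row would come from syllable_features() on annotated audio.
      rng = np.random.default_rng(0)
      X_correct = rng.normal(0.0, 1.0, size=(200, 3))
      X_misplaced = rng.normal(0.5, 1.0, size=(200, 3))

      gmm_ok = GaussianMixture(n_components=4, random_state=0).fit(X_correct)
      gmm_bad = GaussianMixture(n_components=4, random_state=0).fit(X_misplaced)

      def stress_is_misplaced(x):
          # Higher log-likelihood under the "misplaced" model flags an error;
          # the likelihood ratio itself can serve as a pronunciation score.
          return gmm_bad.score(x[None, :]) > gmm_ok.score(x[None, :])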