Science.gov

Sample records for Chinese spoken language

  1. Sign Language Versus Spoken Language

    ERIC Educational Resources Information Center

    Stokoe, William C.

    1978-01-01

    In the debate over continuities versus discontinuities in the emergence of language, sign language is not taken to be the antithesis, but is presented as the antecedent of spoken languages. (Author/HP)

  2. The Spoken Language.

    ERIC Educational Resources Information Center

    Yule, George

    1989-01-01

    An analysis is made of research focusing on how speaking and listening activities are relevant to the language classroom. Current thinking is reviewed on spoken language, with a focus on pronunciation, as a medium of information transfer and of interpersonal exchange. (66 references) (GLR)

  3. Automatic translation among spoken languages

    NASA Technical Reports Server (NTRS)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  4. Automatic translation among spoken languages

    NASA Astrophysics Data System (ADS)

    Walter, Sharon M.; Costigan, Kelly

    1994-02-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  5. Teaching the Spoken Language.

    ERIC Educational Resources Information Center

    Brown, Gillian

    1981-01-01

    Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…

  7. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  9. Spoken Language Phonotactics.

    ERIC Educational Resources Information Center

    Hieke, A. E.

    The transformation that language undergoes when it becomes speech is examined in English. Statistical analysis of a representative sample of natural, informal speech reveals a number of characteristics of dynamic speech that distinguish it from static (citation form or pre-dynamic) linguistic form. It appears that in running speech, vowels and…

  10. Orthographic effects in spoken word recognition: Evidence from Chinese.

    PubMed

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  11. Predictors of spoken language learning.

    PubMed

    Wong, Patrick C M; Ettlinger, Marc

    2011-01-01

We report two sets of experiments showing that the large individual variability in language learning success in adults can be attributed to neurophysiological, neuroanatomical, cognitive, and perceptual factors. In the first set of experiments, native English-speaking adults learned to incorporate lexically meaningful pitch patterns in words. We found that successful learners had higher activation in bilateral auditory cortex, larger volume in Heschl's Gyrus, and more accurate pitch pattern perception; all of these measures were taken before training began. In the second set of experiments, native English-speaking adults learned a phonological grammatical system governing the formation of words of an artificial language. Again, neurophysiological, neuroanatomical, and cognitive factors predicted to an extent how well these adults learned. Taken together, these experiments suggest that neural and behavioral factors can be used to predict spoken language learning, and these predictors can inform the redesign of existing training paradigms to maximize learning for learners with different learning profiles. Readers will be able to: (a) understand the linguistic concepts of lexical tone and phonological grammar, (b) identify the brain regions associated with learning lexical tone and phonological grammar, and (c) identify the cognitive predictors for successful learning of a tone language and phonological rules. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Recent Changes in the Spoken Polish Language.

    ERIC Educational Resources Information Center

    Birkenmayer, Sigmund S.

    Both spoken and written Polish have undergone profound changes during the past twenty-eight years. The increasing urbanization of Polish culture and the forced change in Polish society are the main factors influencing the change in the language. Indirect evidence of changes which have occurred in the vocabulary and idioms of spoken Polish in the…

  13. Building Spoken Language in the First Plane

    ERIC Educational Resources Information Center

    Bettmann, Joen

    2016-01-01

    Through a strong Montessori orientation to the parameters of spoken language, Joen Bettmann makes the case for "materializing" spoken knowledge using the stimulation of real objects and real situations that promote mature discussion around the sensorial aspect of the prepared environment. She lists specific materials in the classroom…

  14. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    ERIC Educational Resources Information Center

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  16. Malay: A Guide to the Spoken Language.

    ERIC Educational Resources Information Center

    Department of Defense, Washington, DC.

    This Malay language guide is of assistance in carrying on simple conversations in Malay and is used in conjunction with records. Malay is spoken by people in Malaya, Sumatra, Java, and Borneo and is widely used as a trade language throughout the Netherlands East Indies. The variety of Malay used in the guide (called Low, Bazaar, or Market Malay)…

  17. Direction Asymmetries in Spoken and Signed Language Interpreting

    ERIC Educational Resources Information Center

    Nicodemus, Brenda; Emmorey, Karen

    2013-01-01

    Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…

  18. Development of a Spoken Language System

    DTIC Science & Technology

    1992-04-01

military logistical transportation planning domain. Created a videotape to illustrate these capabilities and some potential applications of spoken… Participated in all evaluation tests. We now give a brief description of these highlights. During this project we modified our natural language system…processor. (For this part of our system, we have found a value of N=5 to give best understanding results.) The natural language system processes these

  19. Towards Environment-Independent Spoken Language Systems

    DTIC Science & Technology

    1990-01-01

Towards Environment-Independent Spoken Language Systems Alejandro Acero and Richard M. Stern Department of Electrical and Computer Engineering…applications of spectral subtraction and spectral equalization for speech recognition systems include the work of Van Compernolle [5] and Stern and Acero [12… Acero and Stern [1] proposed an approach to environment normalization in the cepstral domain, going beyond the noise stripping problem. In this paper we
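    The normalization this record describes operates on cepstral features. As an illustrative sketch only, the snippet below implements generic cepstral mean normalization (a standard environment-compensation baseline, not necessarily Acero and Stern's specific algorithm): subtracting the per-utterance mean from each cepstral coefficient removes a stationary channel or environment offset.

```python
def cepstral_mean_normalization(frames):
    """Subtract the per-utterance mean from each cepstral coefficient.

    A stationary channel/environment distortion appears as an additive
    offset in the cepstral domain, so removing the mean compensates it.
    `frames` is a list of equal-length cepstral vectors.
    """
    n = len(frames)
    dim = len(frames[0])
    means = [sum(f[d] for f in frames) / n for d in range(dim)]
    return [[f[d] - means[d] for d in range(dim)] for f in frames]

# Toy 2-dimensional "cepstra" for three frames.
frames = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
norm = cepstral_mean_normalization(frames)
print(norm)  # -> [[-2.0, -2.0], [0.0, 0.0], [2.0, 2.0]]
```

    After normalization each coefficient has zero mean over the utterance, so two recordings of the same speech through different stationary channels map to (approximately) the same features.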

  20. Predictors of Spoken Language Learning

    ERIC Educational Resources Information Center

    Wong, Patrick C. M.; Ettlinger, Marc

    2011-01-01

We report two sets of experiments showing that the large individual variability in language learning success in adults can be attributed to neurophysiological, neuroanatomical, cognitive, and perceptual factors. In the first set of experiments, native English-speaking adults learned to incorporate lexically meaningful pitch patterns in words. We…

  1. Spoken Grammar and Its Role in the English Language Classroom

    ERIC Educational Resources Information Center

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  2. Speech-Language Pathologists: Vital Listening and Spoken Language Professionals

    ERIC Educational Resources Information Center

    Houston, K. Todd; Perigoe, Christina B.

    2010-01-01

    Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…

  4. What do second language listeners know about spoken words? Effects of experience and attention in spoken word processing.

    PubMed

    Trofimovich, Pavel

    2008-09-01

    With a goal of investigating psycholinguistic bases of spoken word processing in a second language (L2), this study examined L2 learners' sensitivity to phonological information in spoken L2 words as a function of their L2 experience and attentional demands of a learning task. Fifty-two Chinese learners of English who differed in amount of L2 experience (longer vs. shorter residence in L2 environment) were tested in an auditory word priming experiment on well-known L2 words under two processing orientation conditions (semantic, control). Results revealed that, with more L2 experience, learners become more sensitive to phonological detail in spoken L2 words but that attention to word meaning might eliminate this sensitivity, even for learners with more L2 experience.

  5. Deep bottleneck features for spoken language identification.

    PubMed

    Jiang, Bing; Song, Yan; Wei, Si; Liu, Jun-Hua; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed.

  6. Deep Bottleneck Features for Spoken Language Identification

    PubMed Central

    Jiang, Bing; Song, Yan; Wei, Si; Liu, Jun-Hua; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed. PMID:24983963
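    The core idea of a deep bottleneck feature is that the activation of a narrow hidden layer in a trained DNN forms a compact representation of the input frame. The sketch below is a toy forward pass only: random weights stand in for the trained network of the paper, and the layer sizes (20-64-8-64-20) are invented for illustration.

```python
import math
import random

def make_layer(n_in, n_out, rng):
    """Random weight matrix; a stand-in for trained DNN weights."""
    return [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, x):
    """One fully connected layer with tanh activation."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in layer]

def bottleneck_features(frame, layers, bottleneck_index):
    """Propagate an acoustic frame through the network and return the
    activation of the narrow bottleneck layer as the DBF vector."""
    x = frame
    for i, layer in enumerate(layers):
        x = forward(layer, x)
        if i == bottleneck_index:
            return x
    return x

rng = random.Random(0)
# Toy topology: 20-dim input -> 64 -> 8 (bottleneck) -> 64 -> 20 output.
dims = [20, 64, 8, 64, 20]
layers = [make_layer(dims[i], dims[i + 1], rng) for i in range(len(dims) - 1)]

frame = [rng.gauss(0.0, 1.0) for _ in range(20)]
dbf = bottleneck_features(frame, layers, bottleneck_index=1)
print(len(dbf))  # 8-dimensional compact representation of a 20-dim frame
```

    In the paper's pipeline these per-frame DBF vectors then feed an i-vector model (the DBF-TV and PDBF-TV systems) rather than being used directly.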

  7. Spoken Language Processing Model: Bridging Auditory and Language Processing to Guide Assessment and Intervention

    ERIC Educational Resources Information Center

    Medwetsky, Larry

    2011-01-01

    Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…

  8. Nonlinear filtration of the spoken language signals

    NASA Astrophysics Data System (ADS)

    Kolchenko, Lilia V.; Sinitsyn, Rustem B.

    2009-06-01

This work addresses the important topic of processing acoustic signals in an aircraft cockpit, where high noise levels are observed. We investigated a heuristic approach to nonlinear filtering of the acoustic signal. First, a kernel estimate of the cumulative distribution function was computed, and the signal was transformed using this estimate as a functional transform. The acoustic signal's parameters were then measured and its spectral density estimated using the fast Fourier transform with window functions. At the second stage, a new adaptive filtering procedure based on the Wiener frequency-domain approach was proposed, using the spectral estimates obtained at the first stage. The results are confirmed by experimental processing of spoken language signals.
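    The first stage of the pipeline, a kernel estimate of the cumulative distribution function used as a functional transform, can be sketched directly. This is a generic Gaussian-kernel CDF estimate under assumed toy data and bandwidth, not the authors' exact implementation.

```python
import math

def kernel_cdf(samples, h):
    """Gaussian-kernel estimate of the cumulative distribution function.

    Each sample contributes a Gaussian CDF centred on it with bandwidth h;
    averaging them gives a smooth estimate F(x) ~= P(X <= x).
    """
    n = len(samples)
    def F(x):
        return sum(0.5 * (1.0 + math.erf((x - s) / (h * math.sqrt(2.0))))
                   for s in samples) / n
    return F

# Toy signal samples; in the paper the estimate is built from the
# noisy cockpit recording itself.
samples = [-1.2, -0.4, 0.0, 0.3, 0.9, 1.5]
F = kernel_cdf(samples, h=0.3)

# Functional transform: passing each sample through the estimated CDF
# maps the signal onto (0, 1), approximately uniformly.
transformed = [F(s) for s in samples]
print(transformed)
```

    The transformed signal then goes on to spectral estimation and the Wiener-style adaptive filter described in the record.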

  9. The Development of Phonological Awareness: Effects of Spoken Language Experience and Orthography.

    ERIC Educational Resources Information Center

    Cheung, Him; Chen, Hsuan-Chih; Lai, Chun Yip; Wong, On Chi; Hills, Melanie

    2001-01-01

    Compared phonological awareness of younger prereading children and older literate children from different linguistic backgrounds (Hong Kong and Guangzhou children speaking Chinese and New Zealand children speaking English). Found that both orthographic and spoken language experience affect the development of phonological skills, implying a…

  10. Spoken Language Research and ELT: Where Are We Now?

    ERIC Educational Resources Information Center

    Timmis, Ivor

    2012-01-01

    This article examines the relationship between spoken language research and ELT practice over the last 20 years. The first part is retrospective. It seeks first to capture the general tenor of recent spoken research findings through illustrative examples. The article then considers the sociocultural issues that arose when the relevance of these…

  12. Spoken Language Production in Young Adults: Examining Syntactic Complexity

    ERIC Educational Resources Information Center

    Nippold, Marilyn A.; Frantz-Kaspar, Megan W.; Vigeland, Laura M.

    2017-01-01

    Purpose: In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language…

  13. Languages Spoken by English Learners (ELs). Fast Facts

    ERIC Educational Resources Information Center

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on languages spoken by English learners (ELs) are: (1) Twenty most common EL languages, as reported in states' top five lists: SY 2013-14; (2) States,…

  14. Spoken Word Recognition of Chinese Words in Continuous Speech.

    PubMed

    Yip, Michael C W

    2015-12-01

The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending positions of Cantonese syllables than others, this kind of probabilistic information may cue the locations of syllable boundaries in speech. Two word-spotting experiments were conducted to investigate the role of positional probability in the spoken word recognition process of Cantonese speech. Listeners indeed made use of the positional probability of a syllable's onset, but not of its ending sound, in the spoken word recognition process. Together with other relevant studies in different languages, we propose that probabilistic phonotactics are one useful source of information in the spoken word recognition and speech segmentation process.
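    A positional probability of the kind this record relies on is simply the relative frequency with which a sound occupies a given position in the syllable. A minimal sketch, over a hypothetical toy list of (onset, rime) pairs rather than a real Cantonese corpus:

```python
from collections import Counter

# Hypothetical romanised syllable tokens, split into (onset, rime).
syllables = [("g", "ong"), ("d", "ung"), ("w", "aa"), ("g", "au"),
             ("s", "ik"), ("g", "on"), ("d", "im"), ("f", "aa")]

# Count how often each sound appears in syllable-initial position.
onset_counts = Counter(onset for onset, _ in syllables)
total = sum(onset_counts.values())

# Positional probability: relative frequency of each onset sound.
onset_prob = {o: c / total for o, c in onset_counts.items()}
print(onset_prob["g"])  # "g" starts 3 of the 8 syllables -> 0.375
```

    A sound with a high onset probability is a strong cue that a new syllable, and potentially a new word, begins there, which is the boundary information the word-spotting experiments probe.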

  15. "Visual" Cortex Responds to Spoken Language in Blind Children.

    PubMed

    Bedny, Marina; Richardson, Hilary; Saxe, Rebecca

    2015-08-19

    Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. 

  16. Orthographic Facilitation Effects on Spoken Word Production: Evidence from Chinese

    ERIC Educational Resources Information Center

    Zhang, Qingfang; Weekes, Brendan Stuart

    2009-01-01

    The aim of this experiment was to investigate the time course of orthographic facilitation on picture naming in Chinese. We used a picture-word paradigm to investigate orthographic and phonological facilitation on monosyllabic spoken word production in native Mandarin speakers. Both the stimulus-onset asynchrony (SOA) and the picture-word…

  17. Developmental Differences in the Influence of Phonological Similarity on Spoken Word Processing in Mandarin Chinese

    PubMed Central

    Malins, Jeffrey G.; Gao, Danqi; Tao, Ran; Booth, James R.; Shu, Hua; Joanisse, Marc F.; Liu, Li; Desroches, Amy S.

    2014-01-01

    The developmental trajectory of spoken word recognition has been well established in Indo-European languages, but to date remains poorly characterized in Mandarin Chinese. In this study, typically developing children (N = 17; mean age 10;5) and adults (N = 17; mean age 24) performed a picture-word matching task in Mandarin while we recorded ERPs. Mismatches diverged from expectations in different components of the Mandarin syllable; namely, word-initial phonemes, word-final phonemes, and tone. By comparing responses to different mismatch types, we uncovered evidence suggesting that both children and adults process words incrementally. However, we also observed key developmental differences in how subjects treated onset and rime mismatches. This was taken as evidence for a stronger influence of top-down processing on spoken word recognition in adults compared to children. This work therefore offers an important developmental component to theories of Mandarin spoken word recognition. PMID:25278419

  18. Developmental differences in the influence of phonological similarity on spoken word processing in Mandarin Chinese.

    PubMed

    Malins, Jeffrey G; Gao, Danqi; Tao, Ran; Booth, James R; Shu, Hua; Joanisse, Marc F; Liu, Li; Desroches, Amy S

    2014-11-01

The developmental trajectory of spoken word recognition has been well established in Indo-European languages, but to date remains poorly characterized in Mandarin Chinese. In this study, typically developing children (N=17; mean age 10;5) and adults (N=17; mean age 24) performed a picture-word matching task in Mandarin while we recorded ERPs. Mismatches diverged from expectations in different components of the Mandarin syllable; namely, word-initial phonemes, word-final phonemes, and tone. By comparing responses to different mismatch types, we uncovered evidence suggesting that both children and adults process words incrementally. However, we also observed key developmental differences in how subjects treated onset and rime mismatches. This was taken as evidence for a stronger influence of top-down processing on spoken word recognition in adults compared to children. This work therefore offers an important developmental component to theories of Mandarin spoken word recognition. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Sharing Spoken Language: Sounds, Conversations, and Told Stories

    ERIC Educational Resources Information Center

    Birckmayer, Jennifer; Kennedy, Anne; Stonehouse, Anne

    2010-01-01

    Infants and toddlers encounter numerous spoken story experiences early in their lives: conversations, oral stories, and language games such as songs and rhymes. Many adults are even surprised to learn that children this young need these kinds of natural language experiences at all. Adults help very young children take a step along the path toward…

  1. Listening, Language, and Learning: Skills of Highly Qualified Listening and Spoken Language Specialists in Educational Settings

    ERIC Educational Resources Information Center

    Estes, Ellen L.

    2010-01-01

    The knowledge base required of Listening and Spoken Language Specialists (LSLS) has been outlined in AG Bell Academy for Listening and Spoken Language publications and is available to applicants through these publications and the Academy website (www.agbellacademy.org). The LSLS certification exam serves to evaluate the applicant's breadth of…

  2. Does textual feedback hinder spoken interaction in natural language?

    PubMed

    Le Bigot, Ludovic; Terrier, Patrice; Jamet, Eric; Botherel, Valerie; Rouet, Jean-Francois

    2010-01-01

    The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback, in suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The addition of textual output when the spoken modality is heavily taxed by the task was investigated.

  3. Modular fuzzy-neuro controller driven by spoken language commands.

    PubMed

    Pulasinghe, Koliya; Watanabe, Keigo; Izumi, Kiyotaka; Kiguchi, Kazuo

    2004-02-01

    We present a methodology of controlling machines using spoken language commands. The two major problems relating to the speech interfaces for machines, namely, the interpretation of words with fuzzy implications and the out-of-vocabulary (OOV) words in natural conversation, are investigated. The system proposed in this paper is designed to overcome the above two problems in controlling machines using spoken language commands. The present system consists of a hidden Markov model (HMM) based automatic speech recognizer (ASR), with a keyword spotting system to capture the machine sensitive words from the running utterances and a fuzzy-neural network (FNN) based controller to represent the words with fuzzy implications in spoken language commands. Significance of the words, i.e., the contextual meaning of the words according to the machine's current state, is introduced to the system to obtain more realistic output equivalent to users' desire. Modularity of the system is also considered to provide a generalization of the methodology for systems having heterogeneous functions without diminishing the performance of the system. The proposed system is experimentally tested by navigating a mobile robot in real time using spoken language commands.
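
The pipeline described above, a keyword spotter feeding a fuzzy mapping from imprecise command words to a crisp control value, can be sketched as follows. The vocabularies, membership degrees, and function names are invented for illustration; the paper's actual system uses an HMM-based recognizer and a trained fuzzy-neural network, not a lookup table.

```python
# Hypothetical sketch: keyword spotting drops OOV words, and a fuzzy
# membership table turns vague speed adverbs into a crisp motor command.
# All vocabularies and membership values here are invented for illustration.

KEYWORDS = {"move", "turn", "stop", "slowly", "quickly", "left", "right"}

# Fuzzy implication of speed adverbs: degree of membership in "fast" (0-1).
FUZZY_SPEED = {"slowly": 0.2, "quickly": 0.9}

def spot_keywords(utterance: str) -> list:
    """Keep only machine-sensitive words; OOV words are simply ignored."""
    return [w for w in utterance.lower().split() if w in KEYWORDS]

def speed_command(keywords: list, max_speed: float = 1.0) -> float:
    """Defuzzify: scale the maximum speed by the 'fast' membership degree."""
    degree = next((FUZZY_SPEED[w] for w in keywords if w in FUZZY_SPEED), 0.5)
    return degree * max_speed

words = spot_keywords("please move quickly to the left")
print(words)                 # ['move', 'quickly', 'left']
print(speed_command(words))  # 0.9
```

A real controller would replace the lookup table with the trained FNN and condition its output on the machine's current state, as the abstract describes.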

  4. Inferring Speaker Affect in Spoken Natural Language Communication

    ERIC Educational Resources Information Center

    Pon-Barry, Heather Roberta

    2013-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…

  5. Enduring Advantages of Early Cochlear Implantation for Spoken Language Development

    ERIC Educational Resources Information Center

    Geers, Ann E.; Nicholas, Johanna G.

    2013-01-01

    Purpose: In this article, the authors sought to determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12 and 38 months of age. Relative advantages of receiving a bilateral CI after age 4.5 years, better…

  6. Properties of Spoken and Written Language. Technical Report No. 5.

    ERIC Educational Resources Information Center

    Chafe, Wallace; Danielewicz, Jane

    To find differences and similarities between spoken and written English, analyses were made of four specific kinds of language. Twenty adults, either graduate students or university professors, provided a sample of each of the following: conversations, lectures, informal letters, and academic papers. Conversations and lecture samples came from…

  7. Riverside English: The Spoken Language of a Southern California Community.

    ERIC Educational Resources Information Center

    Metcalf, Allan A.; And Others

    This booklet points out some of the characteristics of the varieties of English spoken in Riverside and in the rest of California. The first chapter provides a general discussion of language variation and change on the levels of vocabulary, pronunciation, and grammar. The second chapter discusses California English and pronunciation and vocabulary…

  10. Planum temporale: where spoken and written language meet.

    PubMed

    Nakada, T; Fujii, Y; Yoneoka, Y; Kwee, I L

    2001-01-01

    Functional magnetic resonance imaging studies on spoken versus written language processing were performed in 20 right-handed normal volunteers on a high-field (3.0-tesla) system. The areas activated in common by both auditory (listening) and visual (reading) language comprehension paradigms were mapped onto the planum temporale (20/20), primary auditory region (2/20), superior temporal sulcus area (2/20) and planum parietale (3/20). The study indicates that the planum temporale represents a common traffic area for cortical processing which needs to access the system of language comprehension. The destruction of this area can result in comprehension deficits in both spoken and written language, i.e. a classical case of Wernicke's aphasia.

  11. Spoken Oral Language and Adult Struggling Readers

    ERIC Educational Resources Information Center

    Bakhtiari, Dariush; Greenberg, Daphne; Patton-Terry, Nicole; Nightingale, Elena

    2015-01-01

    Oral language is a critical component to the development of reading acquisition. Much of the research concerning the relationship between oral language and reading ability is focused on children, while there is a paucity of research focusing on this relationship for adults who struggle with their reading. Oral language as defined in this paper…

  13. Cognitive aging and hearing acuity: modeling spoken language comprehension

    PubMed Central

    Wingfield, Arthur; Amichetti, Nicole M.; Lash, Amanda

    2015-01-01

    The comprehension of spoken language has been characterized by a number of “local” theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model; where it is strong and where there are gaps to be filled. PMID:26124724

  14. Spoken language outcomes after hemispherectomy: factoring in etiology.

    PubMed

    Curtiss, S; de Bode, S; Mathern, G W

    2001-12-01

    We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) and acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p = .0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p = .0006); right-sided resections led to higher SLRs only for the acquired group (p = .0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p = .0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology.

  15. Cognitive aging and hearing acuity: modeling spoken language comprehension.

    PubMed

    Wingfield, Arthur; Amichetti, Nicole M; Lash, Amanda

    2015-01-01

    The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model; where it is strong and where there are gaps to be filled.

  16. Spoken Language Development in Children Following Cochlear Implantation

    PubMed Central

    Niparko, John K.; Tobey, Emily A.; Thal, Donna J.; Eisenberg, Laurie S.; Wang, Nae-Yuh; Quittner, Alexandra L.; Fink, Nancy E.

    2010-01-01

    Context: Cochlear implantation (CI) is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe-to-profound sensorineural hearing loss (SNHL). Objective: To prospectively assess spoken language acquisition following CI in young children with adjustment of covariates. Design, Setting, and Participants: Prospective, longitudinal, and multidimensional assessment of spoken language growth over a 3-year period following CI. Prospective cohort study of children who underwent CI before 5 years of age (n=188) from 6 US centers and hearing children of similar ages (n=97) from 2 preschools recruited between November 2002 and December 2004. Follow-up completed between November 2005 and May 2008. Main Outcome Measures: Performance on measures of spoken language comprehension and expression. Results: Children undergoing CI showed greater growth in spoken language performance (10.4 points/year [95% confidence interval: 9.6–11.2] in comprehension; 8.4 [7.8–9.0] in expression) than would be predicted by their pre-CI baseline scores (5.4 [4.1–6.7] comprehension; 5.8 [4.6–7.0] expression). Although mean scores were not restored to age-appropriate levels after 3 years, significantly greater annual rates of language acquisition were observed in children who were younger at CI (1.1 [0.5–1.7] points in comprehension per year younger; 1.0 [0.6–1.5] in expression) and in children with shorter histories of hearing deficit (0.8 [0.2–1.2] points in comprehension per year shorter; 0.6 [0.2–1.0] for expression). In multivariable analyses, greater residual hearing prior to CI, higher ratings of parent-child interactions, and higher SES were associated with greater rates of growth in comprehension and expression. Conclusions: The use of cochlear implants in young children was associated with better spoken language learning than would be predicted from their pre-implantation scores. However

  17. Spoken language development in children following cochlear implantation.

    PubMed

    Niparko, John K; Tobey, Emily A; Thal, Donna J; Eisenberg, Laurie S; Wang, Nae-Yuh; Quittner, Alexandra L; Fink, Nancy E

    2010-04-21

    Cochlear implantation is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe to profound sensorineural hearing loss (SNHL). To prospectively assess spoken language acquisition following cochlear implantation in young children. Prospective, longitudinal, and multidimensional assessment of spoken language development over a 3-year period in children who underwent cochlear implantation before 5 years of age (n = 188) from 6 US centers and hearing children of similar ages (n = 97) from 2 preschools recruited between November 2002 and December 2004. Follow-up completed between November 2005 and May 2008. Performance on measures of spoken language comprehension and expression (Reynell Developmental Language Scales). Children undergoing cochlear implantation showed greater improvement in spoken language performance (10.4; 95% confidence interval [CI], 9.6-11.2 points per year in comprehension; 8.4; 95% CI, 7.8-9.0 in expression) than would be predicted by their preimplantation baseline scores (5.4; 95% CI, 4.1-6.7, comprehension; 5.8; 95% CI, 4.6-7.0, expression), although mean scores were not restored to age-appropriate levels after 3 years. Younger age at cochlear implantation was associated with significantly steeper rate increases in comprehension (1.1; 95% CI, 0.5-1.7 points per year younger) and expression (1.0; 95% CI, 0.6-1.5 points per year younger). Similarly, each 1-year shorter history of hearing deficit was associated with steeper rate increases in comprehension (0.8; 95% CI, 0.2-1.2 points per year shorter) and expression (0.6; 95% CI, 0.2-1.0 points per year shorter). In multivariable analyses, greater residual hearing prior to cochlear implantation, higher ratings of parent-child interactions, and higher socioeconomic status were associated with greater rates of improvement in comprehension and expression. The use of cochlear implants in young children was
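
Read as linear slopes, the reported rates permit a simple back-of-the-envelope comparison. The extrapolation below is our own simplification, not the study's analysis, and assumes the yearly rates stay constant over the 3-year follow-up window:

```python
# Difference between observed post-implantation growth and baseline-predicted
# growth, accumulated over the 3-year follow-up. Rates (points/year) are taken
# from the abstract above; the linear extrapolation is ours.
def extra_gain(observed_rate: float, predicted_rate: float, years: int = 3) -> float:
    """Extra points gained over `years` at the observed vs. predicted slope."""
    return (observed_rate - predicted_rate) * years

comprehension = extra_gain(10.4, 5.4)  # observed 10.4 vs. predicted 5.4
expression = extra_gain(8.4, 5.8)      # observed 8.4 vs. predicted 5.8
print(round(comprehension, 1), round(expression, 1))  # 15.0 7.8
```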

  18. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language.

    PubMed

    Williams, Joshua T; Newman, Sharlene D

    2017-02-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, relatively few studies generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and the activation of targets in sparse and dense neighborhoods was compared. Neighborhood density effects in the auditory primed lexical decision task were then compared to previous reports of native deaf signers who were only processing sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, increased inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.

  19. Direction asymmetries in spoken and signed language interpreting*

    PubMed Central

    NICODEMUS, BRENDA; EMMOREY, KAREN

    2013-01-01

    Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study (N=1,359) of both unimodal and bimodal interpreters that confirmed these preferences. The L1 to L2 direction preference was stronger for novice than expert bimodal interpreters, while novice and expert unimodal interpreters did not differ from each other. The results indicated that the different direction preferences for bimodal and unimodal interpreters cannot be explained by language production–comprehension asymmetries or by work or training experiences. We suggest that modality and language-specific features of signed languages drive the directionality preferences of bimodal interpreters. Specifically, we propose that fingerspelling, transcoding (literal word-for-word translation), self-monitoring, and consumers’ linguistic variation influence the preference of bimodal interpreters for working into their L2. PMID:23833563

  20. The Unified Phonetic Transcription for Teaching and Learning Chinese Languages

    ERIC Educational Resources Information Center

    Shieh, Jiann-Cherng

    2011-01-01

    In order to preserve distinctive cultures, people anxiously figure out writing systems of their languages as recording tools. Mandarin, Taiwanese and Hakka languages are three major and the most popular dialects of Han languages spoken in Chinese society. Their writing systems are all in Han characters. Various and independent phonetic…

  1. The Child's Path to Spoken Language.

    ERIC Educational Resources Information Center

    Locke, John L.

    A major synthesis of the latest research on early language acquisition, this book explores what gives infants the remarkable capacity to progress from babbling to meaningful sentences, and what inclines a child to speak. The book examines the neurological, perceptual, social, and linguistic aspects of language acquisition in young children, from…

  2. Korean: A Guide to the Spoken Language.

    ERIC Educational Resources Information Center

    Department of Defense, Washington, DC.

    This language guide, written for United States Armed Forces personnel, serves as an introduction to the Korean language and presents important words and phrases for use in normal conversation. Linguistic expressions are classified under the following categories: (1) greetings and general phrases, (2) location, (3) directions, (4) numbers, (5)…

  3. Prosodic Parallelism—Comparing Spoken and Written Language

    PubMed Central

    Wiese, Richard

    2016-01-01

    The Prosodic Parallelism hypothesis claims adjacent prosodic categories to prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies feet contained in the same phonological phrase to display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested by conducting a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed. PMID:27807425
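
The hypothesis's core preference can be encoded as a toy predicate. The simplification, and the 1-vs-2-syllable encoding of unary vs. binary feet, are ours, not the paper's:

```python
# Toy encoding of Prosodic Parallelism (our simplification): adjacent feet
# inside one phonological phrase should branch alike, i.e. all unary
# (1 syllable) or all binary (2 syllables), never mixed.
def parallel(feet: list) -> bool:
    """True if all adjacent feet have identical branching."""
    return all(a == b for a, b in zip(feet, feet[1:]))

print(parallel([2, 2, 2]))  # True  -- uniformly binary, preferred
print(parallel([2, 1, 2]))  # False -- mixed branching, dispreferred
```

In these terms, a schwa-zero alternation amounts to choosing whichever word form keeps the phrase's feet parallel.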

  4. Prosodic Parallelism-Comparing Spoken and Written Language.

    PubMed

    Wiese, Richard

    2016-01-01

    The Prosodic Parallelism hypothesis claims adjacent prosodic categories to prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies feet contained in the same phonological phrase to display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested by conducting a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.

  5. Using Language Sample Analysis to Assess Spoken Language Production in Adolescents

    ERIC Educational Resources Information Center

    Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann

    2016-01-01

    Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…
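
As a flavor of what language sample analysis measures, here is one representative metric, mean length of utterance (MLU) in words. This stripped-down function and the sample utterances are illustrative only; SALT itself computes morpheme-based MLU and many other measures over standardized transcripts.

```python
# Mean length of utterance (MLU) in words: a basic language-sample metric.
# Simplified for illustration; real analyses segment and count morphemes.
def mlu_words(utterances: list) -> float:
    """Average number of word tokens per utterance."""
    counts = [len(u.split()) for u in utterances]
    return sum(counts) / len(counts)

sample = [
    "we went to the game last night",  # 7 words
    "it was really fun",               # 4 words
    "my team won",                     # 3 words
]
print(round(mlu_words(sample), 2))  # 4.67
```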

  7. Chinese Language Learning Motivation.

    ERIC Educational Resources Information Center

    Wen, Xiaohong

    A survey of 77 Asian and Asian-American university students enrolled in first- and second-year Chinese language courses investigated the students' motivations for studying the language and their expectations of what they will gain from studying it. Results indicate two factors accounting for beginning Chinese language study: interest in cultural…

  8. How long-term memory and accentuation interact during spoken language comprehension.

    PubMed

    Li, Xiaoqing; Yang, Yufang

    2013-04-01

    Spoken language comprehension requires immediate integration of different information types, such as semantics, syntax, and prosody. Meanwhile, both the information derived from speech signals and the information retrieved from long-term memory exert their influence on language comprehension immediately. Using EEG (electroencephalogram), the present study investigated how the information retrieved from long-term memory interacts with accentuation during spoken language comprehension. Mini Chinese discourses were used as stimuli, with an interrogative or assertive context sentence preceding the target sentence. The target sentence included one critical word conveying new information. The critical word was either highly expected or lowly expected given the information retrieved from long-term memory. Moreover, the critical word was either consistently accented or inconsistently de-accented. The results revealed that for lowly expected new information, inconsistently de-accented words elicited a larger N400 and larger theta power increases (4-6 Hz) than consistently accented words. In contrast, for the highly expected new information, consistently accented words elicited a larger N400 and larger alpha power decreases (8-14 Hz) than inconsistently de-accented words. The results suggest that, during spoken language comprehension, the effect of accentuation interacted with the information retrieved from long-term memory immediately. Moreover, our results also have important consequences for our understanding of the processing nature of the N400. The N400 amplitude is not only enhanced for incorrect information (new and de-accented word) but also enhanced for correct information (new and accented words).

  9. Accessing characters in spoken Chinese disyllables: An ERP study on the resolution of auditory ambiguity.

    PubMed

    Chen, Xuqian; Huang, Guoliang; Huang, Jian

    2016-01-01

    Chinese differs from most Indo-European languages in its phonological, lexical, and syntactic structures. One of its unique properties is the abundance of homophones at the monosyllabic/morphemic level, with the consequence that monosyllabic homophones are all ambiguous in speech perception. Two-morpheme Chinese words can be composed of two high homophone-density morphemes (HH words), two low homophone-density morphemes (LL words), or one high and one low homophone-density morphemes (LH or HL words). The assumption of a simple inhibitory homophone effect is called into question in the case of disyllabic spoken word recognition, in which the recognition of one morpheme is affected by semantic information given by the other. Event-related brain potentials (ERPs) were used to trace on-line competitions among morphemic homophones in accessing Chinese disyllables. Results showing significant differences in ERP amplitude when comparing LL and LH words, but not when comparing LL and HL words, suggested that the first morpheme cannot be accessed without feedback from the second morpheme. Most importantly, analyses of N400 amplitude among different densities showed a converse homophone effect in which LL words, rather than LH or HL words, triggered larger N400. These findings provide strong evidence of a dynamic integration system at work during spoken Chinese disyllable recognition.

  10. Spanish. A Guide to the Spoken Language.

    ERIC Educational Resources Information Center

    1975

    This pocket-sized guide is designed to enable the learner to carry on simple conversations in Spanish. It is not intended to give a complete command of the Spanish language. The first section, "Useful Words and Phrases," includes these topics: Greetings and General Phrases, Location (e.g. Where is the restaurant?), Directions (e.g. to the right),…

  11. Le Francais parle. Etudes sociolinguistiques (Spoken French. Sociolinguistic Studies). Current Inquiry into Languages and Linguistics 30.

    ERIC Educational Resources Information Center

    Thibault, Pierrette

    This volume contains twelve articles dealing with the French language as spoken in Quebec. The following topics are addressed: (1) language change and variation; (2) coordinating expressions in the French spoken in Montreal; (3) expressive language as source of language change; (4) the role of correction in conversation; (5) social change and…

  12. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  14. The Impact of Biculturalism on Language and Literacy Development: Teaching Chinese English Language Learners

    ERIC Educational Resources Information Center

    Palmer, Barbara C.; Chen, Chia-I; Chang, Sara; Leclere, Judith T.

    2006-01-01

    According to the 2000 United States Census, Americans age five and older who speak a language other than English at home grew 47 percent over the preceding decade. This group accounts for slightly less than one in five Americans (17.9%). Among the minority languages spoken in the United States, Asian-language speakers, including Chinese and other…

  15. "Authenticity" in Language Testing: Evaluating Spoken Language Tests for International Teaching Assistants.

    ERIC Educational Resources Information Center

    Hoekje, Barbara; Linnell, Kimberly

    1994-01-01

    Bachman's framework of language testing and standard of authenticity for language testing instruments were used to evaluate three instruments--the SPEAK (Spoken Proficiency English Assessment Kit) test, OPI (Oral Proficiency Interview), and a performance test--as language tests for nonnative-English-speaking teaching assistants. (Contains 53…

  16. Enduring Advantages of Early Cochlear Implantation for Spoken Language Development

    PubMed Central

    Geers, Ann E.; Nicholas, Johanna G.

    2013-01-01

    Purpose: To determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12 and 38 months of age. Relative advantages of receiving a bilateral CI after age 4.5, better pre-CI aided hearing, and longer CI experience were also examined. Method: Sixty children participated in a prospective longitudinal study of outcomes at 4.5 and 10.5 years of age. Twenty-nine children received a sequential second CI. Test scores were compared to normative samples of hearing age-mates, and predictors of outcomes were identified. Results: Standard scores on language tests at 10.5 years of age remained significantly correlated with age of first cochlear implantation. Scores were not associated with receipt of a second, sequentially acquired CI. Significantly higher scores were achieved for vocabulary as compared with overall language, a finding not evident when the children were tested at younger ages. Conclusion: Age-appropriate spoken language skills continued to be more likely with younger AOI, even after an average of 8.6 years of additional CI use. Receipt of a second implant between ages 4 and 10 years and longer duration of device use did not provide significant added benefit. PMID:23275406

  17. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    ERIC Educational Resources Information Center

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  18. The Cortical Organization of Lexical Knowledge: A Dual Lexicon Model of Spoken Language Processing

    ERIC Educational Resources Information Center

    Gow, David W., Jr.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood.…

  20. Bimodal Bilinguals Co-activate Both Languages during Spoken Comprehension

    PubMed Central

    Shook, Anthony; Marian, Viorica

    2012-01-01

    Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are also activated in parallel. Hearing ASL-English bimodal bilinguals’ and English monolinguals’ eye-movements were recorded during a visual world paradigm, in which participants were instructed, in English, to select objects from a display. In critical trials, the target item appeared with a competing item that overlapped with the target in ASL phonology. Bimodal bilinguals looked more at competing items than at phonologically unrelated items, and looked more at competing items relative to monolinguals, indicating activation of the sign-language during spoken English comprehension. The findings suggest that language co-activation is not modality specific, and provide insight into the mechanisms that may underlie cross-modal language co-activation in bimodal bilinguals, including the role that top-down and lateral connections between levels of processing may play in language comprehension. PMID:22770677

  1. Development of brain networks involved in spoken word processing of Mandarin Chinese

    PubMed Central

    Cao, Fan; Khalid, Kainat; Lee, Rebecca; Brennan, Christine; Yang, Yanhui; Li, Kuncheng; Bolger, Donald J.; Booth, James R.

    2010-01-01

    Developmental differences in phonological and orthographic processing of Chinese spoken words were examined in 9-year-olds, 11-year-olds and adults using functional magnetic resonance imaging (fMRI). Rhyming and spelling judgments were made to two-character words presented sequentially in the auditory modality. Developmental comparisons between adults and both groups of children combined showed that age-related changes in activation in visuo-orthographic regions depended on task. There were developmental increases in left inferior temporal gyrus and right inferior occipital gyrus in the spelling task, suggesting more extensive visuo-orthographic processing in a task that required access to these representations. Conversely, there were developmental decreases in activation in left fusiform gyrus and left middle occipital gyrus in the rhyming task, suggesting that the development of reading is marked by reduced involvement of orthography in a spoken language task that does not require access to these orthographic representations. Developmental decreases may arise from the existence of extensive homophony (auditory words that have multiple spellings) in Chinese. In addition, we found that 11-year-olds and adults showed similar activation in left superior temporal gyrus across tasks, with both groups showing greater activation than 9-year-olds. This pattern suggests early development of perceptual representations of phonology. In contrast, 11-year-olds and 9-year-olds showed similar activation in left inferior frontal gyrus across tasks, with both groups showing weaker activation than adults. This pattern suggests late development of controlled retrieval and selection of lexical representations. Altogether, this study suggests differential effects of character acquisition on development of components of the language network in Chinese as compared to previous reports on alphabetic languages. PMID:20884355

  2. How reading acquisition changes children's spoken language network.

    PubMed

    Monzalvo, Karla; Dehaene-Lambertz, Ghislaine

    2013-12-01

    To examine the influence of age and reading proficiency on the development of the spoken language network, we tested 6- and 9-year-old children listening to native and foreign sentences in a slow event-related fMRI paradigm. We observed a stable organization of the peri-sylvian areas during this time period, with left dominance in the superior temporal sulcus and inferior frontal region. A year of reading instruction was nevertheless sufficient to increase activation in regions involved in phonological representations (posterior superior temporal region) and sentence integration (temporal pole and pars orbitalis). A top-down activation of the left inferior temporal cortex surrounding the visual word form area was also observed, but only in 9-year-olds (3 years of reading practice) listening to their native language. These results emphasize how a successful cultural practice, reading, fits into the biological constraints of the innate spoken language network. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    ERIC Educational Resources Information Center

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  4. Windows into Sensory Integration and Rates in Language Processing: Insights from Signed and Spoken Languages

    ERIC Educational Resources Information Center

    Hwang, So-One K.

    2011-01-01

    This dissertation explores the hypothesis that language processing proceeds in "windows" that correspond to representational units, where sensory signals are integrated according to time-scales that correspond to the rate of the input. To investigate universal mechanisms, a comparison of signed and spoken languages is necessary. Underlying the…

  6. Spoken Language Production in Young Adults: Examining Syntactic Complexity.

    PubMed

    Nippold, Marilyn A; Frantz-Kaspar, Megan W; Vigeland, Laura M

    2017-05-24

    In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language impairment. Forty adults (mean age = 22 years, 10 months) with typical language development participated in an interview that consisted of 3 speaking tasks: a general conversation about common, everyday topics; a narrative retelling task that involved fables; and a question-and-answer, critical-thinking task about the fables. Each speaker's interview was audio-recorded, transcribed, broken into communication units, coded for main and subordinate clauses, entered into Systematic Analysis of Language Transcripts (Miller, Iglesias, & Nockerts, 2004), and analyzed for mean length of communication unit and clausal density. Both the narrative and critical-thinking tasks elicited significantly greater syntactic complexity than the conversational task. It was also found that syntactic complexity was significantly greater during the narrative task than the critical-thinking task. Syntactic complexity was best revealed by a narrative task that involved fables. The study offers benchmarks for language development during early adulthood.

  7. Native Language Spoken as a Risk Marker for Tooth Decay.

    PubMed

    Carson, J; Walker, L A; Sanders, B J; Jones, J E; Weddell, J A; Tomlin, A M

    2015-01-01

    The purpose of this study was to assess dmft, the number of decayed, missing (due to caries), and/or filled primary teeth, of English-speaking and non-English-speaking patients under the age of 72 months at a hospital-based pediatric dental clinic, to determine whether native language is a risk marker for tooth decay. Records from an outpatient dental clinic which met the inclusion criteria were reviewed. Patient demographics and dmft score were recorded, and the patients were separated into three groups by the native language spoken by their parents: English, Spanish, and all other languages. A total of 419 charts were assessed: 253 English-speaking, 126 Spanish-speaking, and 40 other native languages. After accounting for patient characteristics, dmft was significantly higher for the other-language group than for the English-speaking (p<0.001) and Spanish-speaking groups (p<0.05); however, the English-speaking and Spanish-speaking groups were not different from each other (p>0.05). Patients under 72 months of age whose parents' native language is not English or Spanish have the highest risk for increased dmft when compared to English- and Spanish-speaking patients. Providers should consider taking additional time to educate patients and their parents, in their native language, on the importance of routine dental care and oral hygiene.

  8. Spoken English Language Development Among Native Signing Children With Cochlear Implants

    PubMed Central

    Davidson, Kathryn

    2014-01-01

    Bilingualism is common throughout the world, and bilingual children regularly develop into fluently bilingual adults. In contrast, children with cochlear implants (CIs) are frequently encouraged to focus on a spoken language to the exclusion of sign language. Here, we investigate the spoken English language skills of 5 children with CIs who also have deaf signing parents, and so receive exposure to a full natural sign language (American Sign Language, ASL) from birth, in addition to spoken English after implantation. We compare their language skills with hearing ASL/English bilingual children of deaf parents. Our results show comparable English scores for the CI and hearing groups on a variety of standardized language measures, exceeding previously reported scores for children with CIs with the same age of implantation and years of CI use. We conclude that natural sign language input does no harm and may mitigate negative effects of early auditory deprivation for spoken language development. PMID:24150489

  9. Splenium development and early spoken language in human infants.

    PubMed

    Swanson, Meghan R; Wolff, Jason J; Elison, Jed T; Gu, Hongbin; Hazlett, Heather C; Botteron, Kelly; Styner, Martin; Paterson, Sarah; Gerig, Guido; Constantino, John; Dager, Stephen; Estes, Annette; Vachet, Clement; Piven, Joseph

    2017-03-01

    The association between developmental trajectories of language-related white matter fiber pathways from 6 to 24 months of age and individual differences in language production at 24 months of age was investigated. The splenium of the corpus callosum, a fiber pathway projecting through the posterior hub of the default mode network to occipital visual areas, was examined as well as pathways implicated in language function in the mature brain, including the arcuate fasciculi, uncinate fasciculi, and inferior longitudinal fasciculi. The hypothesis that the development of neural circuitry supporting domain-general orienting skills would relate to later language performance was tested in a large sample of typically developing infants. The present study included 77 infants with diffusion weighted MRI scans at 6, 12 and 24 months and language assessment at 24 months. The rate of change in splenium development varied significantly as a function of language production, such that children with greater change in fractional anisotropy (FA) from 6 to 24 months produced more words at 24 months. Contrary to findings from older children and adults, significant associations between language production and FA in the arcuate, uncinate, or left inferior longitudinal fasciculi were not observed. The current study highlights the importance of tracing brain development trajectories from infancy to fully elucidate emerging brain-behavior associations while also emphasizing the role of the splenium as a key node in the structural network that supports the acquisition of spoken language. © 2015 John Wiley & Sons Ltd.

  10. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech.

    PubMed

    Yip, Michael C W

    2016-04-01

    Two word-spotting experiments were conducted to examine whether native Cantonese listeners are constrained by phonotactic information in the spoken word recognition of Chinese words in speech. Because no legal consonant clusters occur within an individual Chinese word, this kind of categorical phonotactic information may be most likely to cue native Cantonese listeners to the locations of possible word boundaries in speech. The observed results from the two word-spotting experiments confirmed this prediction. Together with other relevant studies, we suggest that phonotactic constraints are one of the useful sources of information in the spoken word recognition of Chinese words in speech.

  11. Development of Mandarin spoken language after pediatric cochlear implantation.

    PubMed

    Li, Bei; Soli, Sigfrid D; Zheng, Yun; Li, Gang; Meng, Zhaoli

    2014-07-01

    The purpose of this study was to evaluate early spoken language development in young Mandarin-speaking children during the first 24 months after cochlear implantation, as measured by receptive and expressive vocabulary growth rates. Growth rates were compared with those of normally hearing children and with growth rates for English-speaking children with cochlear implants. Receptive and expressive vocabularies were measured with the simplified short form (SSF) version of the Mandarin Communicative Development Inventory (MCDI) in a sample of 112 pediatric implant recipients at baseline, 3, 6, 12, and 24 months after implantation. Implant ages ranged from 1 to 5 years. Scores were expressed in terms of normal equivalent ages, allowing normalized vocabulary growth rates to be determined. Scores for English-speaking children were re-expressed in these terms, allowing direct comparisons of Mandarin and English early spoken language development. Vocabulary growth rates during the first 12 months after implantation were similar to those for normally hearing children less than 16 months of age. Comparisons with growth rates for normally hearing children 16-30 months of age showed that the youngest implant age group (1-2 years) had an average growth rate 0.68 times that of normally hearing children; the middle implant age group (2-3 years) had an average growth rate of 0.65; and the oldest implant age group (>3 years) had an average growth rate of 0.56, significantly less than the other two rates. Growth rates for English-speaking children with cochlear implants were 0.68 in the youngest group, 0.54 in the middle group, and 0.57 in the oldest group. Growth rates in the middle implant age groups for the two languages differed significantly. The SSF version of the MCDI is suitable for assessment of Mandarin language development during the first 24 months after cochlear implantation. Effects of implant age and duration of implantation can be compared directly across

  12. Pupillometry reveals processing load during spoken language comprehension.

    PubMed

    Engelhardt, Paul E; Ferreira, Fernanda; Patsenko, Elena G

    2010-04-01

    This study investigated processing effort by measuring people's pupil diameter as they listened to sentences containing a temporary syntactic ambiguity. In the first experiment, we manipulated prosody. The results showed that when prosodic structure conflicted with syntactic structure, pupil diameter reliably increased. In the second experiment, we manipulated both prosody and visual context. The results showed that when visual context was consistent with the correct interpretation, prosody had very little effect on processing effort. However, when visual context was inconsistent with the correct interpretation, prosody had a large effect on processing effort. The interaction between visual context and prosody shows that visual context has an effect on online processing and that it can modulate the influence of linguistic sources of information, such as prosody. Pupillometry is a sensitive measure of processing effort during spoken language comprehension.

  13. Encoding lexical tones in jTRACE: a simulation of monosyllabic spoken word recognition in Mandarin Chinese.

    PubMed

    Shuai, Lan; Malins, Jeffrey G

    2017-02-01

    Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we built on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then subsequently simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.

  14. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    ERIC Educational Resources Information Center

    Yip, Michael C.

    2016-01-01

    Two word-spotting experiments were conducted to examine whether native Cantonese listeners are constrained by phonotactic information in the spoken word recognition of Chinese words in speech. Because no legal consonant clusters occur within an individual Chinese word, this kind of categorical phonotactic information of Chinese…

  16. Eye Movements and Spoken Language Comprehension: Effects of Visual Context on Syntactic Ambiguity Resolution

    ERIC Educational Resources Information Center

    Spivey, Michael J.; Tanenhaus, Michael K.; Eberhard, Kathleen M.; Sedivy, Julie C.

    2002-01-01

    When participants follow spoken instructions to pick up and move objects in a visual workspace, their eye movements to the objects are closely time-locked to referential expressions in the instructions. Two experiments used this methodology to investigate the processing of the temporary ambiguities that arise because spoken language unfolds over…

  17. Expressive Spoken Language Development in Deaf Children with Cochlear Implants Who Are Beginning Formal Education

    ERIC Educational Resources Information Center

    Inscoe, Jayne Ramirez; Odell, Amanda; Archbold, Susan; Nikolopoulos, Thomas

    2009-01-01

    This paper assesses the expressive spoken grammar skills of young deaf children using cochlear implants who are beginning formal education, compares them with those achieved by normally hearing children, and considers possible implications for educational management. Spoken language grammar was assessed, three years after implantation, in 45 children…

  18. Redundancy in Foreign Language Reading Comprehension Instruction: Concurrent Written and Spoken Presentations

    ERIC Educational Resources Information Center

    Diao, Yali; Sweller, John

    2007-01-01

    In an example of the redundancy effect, learning is inhibited when written and spoken text containing the same information is presented simultaneously rather than in written or spoken form alone. The current research was designed to investigate whether the redundancy effect applied to reading comprehension in English as a foreign language (EFL) by…

  19. User-Centred Design for Chinese-Oriented Spoken English Learning System

    ERIC Educational Resources Information Center

    Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting

    2016-01-01

    Oral production is an important part of English learning. Lack of a language environment with efficient instruction and feedback is a major obstacle to improving non-native speakers' spoken English skills. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…

  1. Top Languages Spoken by English Language Learners Nationally and by State. ELL Information Center Fact Sheet Series. No. 3

    ERIC Educational Resources Information Center

    Batalova, Jeanne; McHugh, Margie

    2010-01-01

    While English Language Learner (ELL) students in the United States speak more than 150 languages, Spanish is by far the most common home or first language; it is not, however, the top language spoken by ELLs in every state. This fact sheet, based on analysis of the U.S. Census Bureau's 2009 American Community Survey, documents the top languages spoken…

  2. Foreign Language Tutoring in Oral Conversations Using Spoken Dialog Systems

    NASA Astrophysics Data System (ADS)

    Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae

    Although there have been enormous investments in English education all around the world, little has changed in the style of English instruction. Considering the shortcomings of the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches, including theories, technologies, systems, and field studies, and at providing relevant pointers. On top of state-of-the-art spoken dialog system technologies, a variety of adaptations have been applied to overcome problems caused by the numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that helps learners become proficient. Integrating these efforts resulted in the intelligent educational robots Mero and Engkey, and the virtual 3D language learning game Pomy. To verify the effects of our approaches on students' communicative abilities, we conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.

  3. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.

    PubMed

    Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung

    2016-01-01

    Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grading of Recommendations Assessment, Development and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.

  4. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning

    PubMed Central

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.

    2016-01-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success

  5. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    PubMed

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success

  6. Attentional Capture of Objects Referred to by Spoken Language

    ERIC Educational Resources Information Center

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  7. Profound deafness and the acquisition of spoken language in children.

    PubMed

    Vlastarakos, Petros V

    2012-12-08

    Profound congenital sensorineural hearing loss (SNHL) is not infrequent, affecting 1 to 2 of every 1000 newborns in western countries. Nevertheless, universal hearing screening programs have not been widely applied, although such programs are already established for metabolic diseases. The acquisition of spoken language is a time-dependent process, and some form of linguistic input should be present within the first 6 mo of life for a child to become linguistically competent. Therefore, profoundly deaf children should be detected early and referred promptly so that auditory rehabilitation can be initiated. Hearing assessment methods should accurately reflect the behavioural audiogram. Additional disabilities also need to be taken into account. Profound congenital SNHL is managed by a multidisciplinary team. Affected infants should be bilaterally fitted with hearing aids no later than 3 mo after birth, and monitored through the first year of age. If they are not progressing linguistically, cochlear implantation can be considered after thorough preoperative assessment. Prelingually deaf children develop significant speech perception and production abilities, and speech intelligibility, over time following cochlear implantation. Age at intervention and oral communication are the most important determinants of outcomes. Realistic parental expectations are also essential. Cochlear implant programs deserve the strong support of community members, professional bodies, and political authorities in order to be successful and to maximize the future benefits of pediatric cochlear implantation for human societies.

  8. Iconicity as a General Property of Language: Evidence from Spoken and Signed Languages

    PubMed Central

    Perniss, Pamela; Thompson, Robin L.; Vigliocco, Gabriella

    2010-01-01

    Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings found in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, one that may serve the function of reducing the gap between linguistic form and conceptual representation, allowing the language system to “hook up” to motor, perceptual, and affective experience. PMID:21833282

  10. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    ERIC Educational Resources Information Center

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied…

  12. Comprehension of Spoken, Written and Signed Sentences in Childhood Language Disorders.

    ERIC Educational Resources Information Center

    Bishop, D. V. M.

    1982-01-01

    Nine children suffering from Landau-Kleffner (L-K) syndrome and 25 children with developmental expressive disorders were tested for comprehension of English grammatical structures in spoken, written, and signed language modalities. L-K children demonstrated comprehension problems in all three language modalities and tended to treat language as…

  13. Spoken Word Recognition of Chinese Words in Continuous Speech

    ERIC Educational Resources Information Center

    Yip, Michael C. W.

    2015-01-01

    The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending positions of Cantonese syllables than others, this kind of probabilistic information about syllables may cue the locations…

  14. A Spoken Access Approach for Chinese Text and Speech Information Retrieval.

    ERIC Educational Resources Information Center

    Chien, Lee-Feng; Wang, Hsin-Min; Bai, Bo-Ren; Lin, Sun-Chein

    2000-01-01

    Presents an efficient spoken-access approach for both Chinese text and Mandarin speech information retrieval. Highlights include human-computer interaction via voice input, speech query recognition at the syllable level, automatic term suggestion, relevance feedback techniques, and experiments that show an improvement in the effectiveness of…

  15. Horizontal Flow of Semantic and Phonological Information in Chinese Spoken Sentence Production

    ERIC Educational Resources Information Center

    Yang, Jin-Chen; Yang, Yu-Fang

    2008-01-01

    A variant of the picture--word interference paradigm was used in three experiments to investigate the horizontal information flow of semantic and phonological information between nouns in spoken Mandarin Chinese sentences. Experiment 1 demonstrated that there is a semantic interference effect when the word in the second phrase (N3) and the first…

  17. Test Anxiety Analysis of Chinese College Students in Computer-Based Spoken English Test

    ERIC Educational Resources Information Center

    Yanxia, Yang

    2017-01-01

    Test anxiety is a commonly recognized factor that can greatly influence test takers' performance. Using designed questionnaires and a computer-based spoken English test, this paper explored the test anxiety of Chinese college students from both macro and micro aspects, and found that the major anxiety in…

  18. A Critique of Mark D. Allen's "The Preservation of Verb Subcategory Knowledge in a Spoken Language Comprehension Deficit"

    ERIC Educational Resources Information Center

    Kemmerer, David

    2008-01-01

    Allen [Allen, M. (2005). "The preservation of verb subcategory knowledge in a spoken language comprehension deficit." "Brain and Language, 95", 255-264.] reports a single patient, WBN, who, during spoken language comprehension, is still able to access some of the syntactic properties of verbs despite being unable to access some of their semantic…

  20. Neural stages of spoken, written, and signed word processing in beginning second language learners

    PubMed Central

    Leonard, Matthew K.; Ferjan Ramirez, Naja; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I.; Halgren, Eric

    2013-01-01

    We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language. PMID:23847496

  2. Attentional Capture of Objects Referred to by Spoken Language

    PubMed Central

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants generated an eye movement to the target object. In experiment 1, responses were slower when the spoken word referred to the distractor object than when it referred to the target object. In experiment 2, responses were slower when the spoken word referred to a distractor object than when it referred to an object not in the display. In experiment 3, the cue was a small shift in location of the target object and participants indicated the direction of the shift. Responses were slowest when the word referred to the distractor object, faster when the word did not have a referent, and fastest when the word referred to the target object. Taken together, the results demonstrate that referents of spoken words capture attention. PMID:21517215

  3. Expository Writing in Chinese. International Studies, East Asian Language Texts, No. 5.

    ERIC Educational Resources Information Center

    McMahon, Keith; And Others

    This text is intended for use by advanced students of the Chinese language to learn to write at the college level in modern Chinese. The first ten lessons teach how to progress from the spoken structures to their contemporary written forms. Each lesson contains a text with a familiar form, notes on grammatical structures, and exercises. The text…

  4. Effects of Time Lag in the Introduction of Characters into the Chinese Language Curriculum.

    ERIC Educational Resources Information Center

    Packard, Jerome L.

    1990-01-01

    Examines the effects of a time lag before introducing Chinese characters into the elementary Chinese language curriculum. Results found that college students who were provided a three-week time lag were better able to discriminate phonetically and transcribe unfamiliar Mandarin syllables and were more fluent in spoken Mandarin than students who…

  5. Preferred spoken language mediates differences in neuraxial labor analgesia utilization among racial and ethnic groups.

    PubMed

    Caballero, J A; Butwick, A J; Carvalho, B; Riley, E T

    2014-05-01

    The aims of this study were to assess racial/ethnic disparities for neuraxial labor analgesia utilization and to determine if preferred spoken language mediates the association between race/ethnicity and neuraxial labor analgesia utilization. We performed a retrospective cohort study of 3129 obstetric patients who underwent vaginal delivery at a tertiary care obstetric center. Bivariate analyses and multivariate logistic regression models were used to assess the relationships between race/ethnicity, preferred spoken language and neuraxial labor analgesia. Hispanic ethnicity (adjusted OR 0.77, 95% CI 0.61-0.98) and multiparity (adjusted OR 0.59, 95% CI 0.51-0.69) were independently associated with a reduced likelihood of neuraxial labor analgesia utilization. When preferred spoken language was controlled for, the effect of Hispanic ethnicity was no longer significant (adjusted OR 0.84, 95% CI 0.66-1.08) and only non-English preferred spoken language (adjusted OR 0.82, 95% CI 0.67-0.99) and multiparity (adjusted OR 0.59, 95% CI 0.51-0.69) were associated with a reduced likelihood of neuraxial labor analgesia utilization. This study provides evidence that preferred spoken language mediates the relationship between Hispanic ethnicity and neuraxial labor analgesia utilization. Copyright © 2013 Elsevier Ltd. All rights reserved.
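
    The mediation logic in this abstract (an ethnicity effect that disappears once preferred spoken language is controlled for) can be illustrated with a pair of logistic regressions. The sketch below uses simulated data under assumed effect sizes, not the study's data, and a plain Newton-Raphson fit rather than any particular statistics package.

```python
# Sketch of the mediation pattern reported above: fit a logistic model of
# neuraxial-analgesia use on ethnicity alone, then add preferred spoken
# language, and observe the ethnicity odds ratio move toward 1. The data
# are simulated under assumed effect sizes; only language truly drives
# the outcome here.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
hispanic = rng.binomial(1, 0.3, n)
# Assumption: non-English preference is far more common among Hispanic patients.
non_english = rng.binomial(1, np.where(hispanic == 1, 0.6, 0.05))
# Outcome depends on language only (true OR for non-English = exp(-0.5)).
p = 1 / (1 + np.exp(-(0.8 - 0.5 * non_english)))
used_neuraxial = rng.binomial(1, p)

def fit_logistic(X, y, iters=15):
    """Newton-Raphson logistic regression; returns [intercept, coefs...]."""
    X = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (y - mu)
        hess = (X * (mu * (1 - mu))[:, None]).T @ X
        w += np.linalg.solve(hess, grad)
    return w

# Model 1: ethnicity only. Its coefficient absorbs the language effect.
w1 = fit_logistic(hispanic.reshape(-1, 1), used_neuraxial)
# Model 2: ethnicity + language. The ethnicity OR should shrink toward 1.
w2 = fit_logistic(np.column_stack([hispanic, non_english]), used_neuraxial)
print(np.exp(w1[1]), np.exp(w2[1]))  # crude OR vs language-adjusted OR
```

    Run on this simulated cohort, the crude Hispanic odds ratio is well below 1, while the language-adjusted one is close to 1, mirroring the pattern of adjusted ORs reported in the abstract.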

  6. Parent Telegraphic Speech Use and Spoken Language in Preschoolers with ASD

    ERIC Educational Resources Information Center

    Venker, Courtney E.; Bolt, Daniel M.; Meyer, Allison; Sindberg, Heidi; Weismer, Susan Ellis; Tager-Flusberg, Helen

    2015-01-01

    Purpose: There is considerable controversy regarding whether to use telegraphic or grammatical input when speaking to young children with language delays, including children with autism spectrum disorder (ASD). This study examined telegraphic speech use in parents of preschoolers with ASD and associations with children's spoken language 1 year…

  7. The Spoken Language of Teachers and Pupils in the Education of Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Huntington, Alan; Watton, Faval

    1986-01-01

    The spoken language of 24 teachers and 131 hearing-impaired students (at the 6-, 10-, and 14-year levels) was analyzed for sentence length and complexity. Results revealed that the oral-alone (OA) teachers in OA institutions created richer language environments and helped children display relatively enhanced oral linguistic growth compared to laissez-faire…

  8. Gesture in Multiparty Interaction: A Study of Embodied Discourse in Spoken English and American Sign Language

    ERIC Educational Resources Information Center

    Shaw, Emily P.

    2013-01-01

    This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…

  9. Tonal Language Background and Detecting Pitch Contour in Spoken and Musical Items

    ERIC Educational Resources Information Center

    Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.

    2013-01-01

    An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…

  11. Spoken Language Benefits of Extending Cochlear Implant Candidacy Below 12 Months of Age

    PubMed Central

    Nicholas, Johanna G.; Geers, Ann E.

    2013-01-01

    Objective To test the hypothesis that cochlear implantation surgery before 12 months of age yields better spoken language results than surgery between 12 and 18 months of age. Study Design Language testing administered to children at 4.5 years of age (± 2 months). Setting Schools, speech-language therapy offices, and cochlear implant (CI) centers in the US and Canada. Participants 69 children who received a cochlear implant between 6 and 18 months of age. All children were learning to communicate via listening and spoken language in English-speaking families. Main Outcome Measures Standard scores on receptive vocabulary and on expressive and receptive language (including grammar). Results Children with CI surgery at 6–11 months (N=27) achieved higher scores on all measures than those with surgery at 12–18 months (N=42). Regression analysis revealed a linear relationship between age at implantation and language outcomes throughout the 6–18 month surgery-age range. Conclusion For children in intervention programs emphasizing listening and spoken language, cochlear implantation before 12 months of age appears to provide a significant advantage for spoken language achievement observed at 4.5 years of age. PMID:23478647

  14. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    PubMed

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Narrative spoken language skills in severely hearing impaired school-aged children with cochlear implants.

    PubMed

    Boons, Tinne; De Raeve, Leo; Langereis, Margreet; Peeraer, Louis; Wouters, Jan; van Wieringen, Astrid

    2013-11-01

    Cochlear implants have a significant positive effect on spoken language development in severely hearing impaired children. Previous work in this population has focused mostly on the emergence of early-developing language skills, such as vocabulary. The current study aims at comparing narratives, which are more complex and later-developing spoken language skills, of a contemporary group of profoundly deaf school-aged children using cochlear implants (n=66, median age=8 years 3 months) with matched normal hearing peers. Results show that children with cochlear implants demonstrate good results on quantity and coherence of the utterances, but problematic outcomes on quality, content and efficiency of retold stories. However, for a subgroup (n=20, median age=8 years 1 month) of deaf children without additional disabilities who receive cochlear implantation before the age of 2 years, use two implants, and are raised with one spoken language, age-adequate spoken narrative skills at school-age are feasible. This is the first study to set the goals regarding spoken narrative skills for deaf children using cochlear implants. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    PubMed

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing.

  17. AG Bell Academy Certification Program for Listening and Spoken Language Specialists: Meeting a World-Wide Need for Qualified Professionals

    ERIC Educational Resources Information Center

    Goldberg, Donald M.; Dickson, Cheryl L.; Flexer, Carol

    2010-01-01

    This article discusses the AG Bell Academy for Listening and Spoken Language--an organization designed to build capacity of certified Listening and Spoken Language Specialists (LSLS) by defining and maintaining a set of professional standards for LSLS professionals and thereby addressing the global deficit of qualified LSLS. Definitions and…

  18. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    ERIC Educational Resources Information Center

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  19. Word reading skill predicts anticipation of upcoming spoken language input: a study of children developing proficiency in reading.

    PubMed

    Mani, Nivedita; Huettig, Falk

    2014-10-01

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing potentially attributable to participants' literacy skills. Against this background, the current study took a look at the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, like in previous studies to date, were successfully able to anticipate upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. L'Enonce Toura-Cote d'Ivoire (The Spoken Language of Toura-Ivory Coast).

    ERIC Educational Resources Information Center

    Bearth, Thomas

    The spoken language of Toura, a language spoken by nearly 20,000 inhabitants of a mountainous region situated in the north of Man, the administrative center of the West Ivory Coast, is systematically analyzed in this linguistic study. Sixteen major chapters include: (1) grammatical generalizations, (2) phonemic unities, (3) classification of…

  2. Factors Influencing Verbal Intelligence and Spoken Language in Children with Phenylketonuria.

    PubMed

    Soleymani, Zahra; Keramati, Nasrin; Rohani, Farzaneh; Jalaei, Shohre

    2015-05-01

    To determine verbal intelligence and spoken language of children with phenylketonuria and to study the effect of age at diagnosis and phenylalanine plasma level on these abilities. Cross-sectional. Children with phenylketonuria were recruited from pediatric hospitals in 2012. Normal control subjects were recruited from kindergartens in Tehran. 30 phenylketonuria and 42 control subjects aged 4-6.5 years. Skills were compared between 3 phenylketonuria groups categorized by age at diagnosis/treatment, and between the phenylketonuria and control groups. Scores on Wechsler Preschool and Primary Scale of Intelligence for verbal and total intelligence, and Test of Language Development-Primary, third edition for spoken language, listening, speaking, semantics, syntax, and organization. The performance of control subjects was significantly better than that of early-treated subjects for all composite quotients from Test of Language Development and verbal intelligence (P<0.001). Early-treated subjects scored significantly higher than the two groups of late-treated subjects for spoken language (P=0.01), speaking (P=0.04), syntax (P=0.02), and verbal intelligence (P=0.019). There was a negative correlation between phenylalanine level and verbal intelligence (r=-0.79) in early-treated subjects and between phenylalanine level and spoken language (r=-0.71), organization (r=-0.82), and semantics (r=-0.82) for late-treated subjects diagnosed before the age of 1 year. The study confirmed that diagnosis of newborns and control of blood phenylalanine concentration improves verbal intelligence and spoken language scores in phenylketonuria subjects.
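The r values above are Pearson product-moment correlations. As an illustration only — the numbers below are invented, not the study's data — a minimal Python sketch of how such a coefficient is computed:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical phenylalanine levels (mg/dL) and verbal IQ scores:
phe = [4.0, 6.5, 8.0, 10.2, 12.5]
viq = [110, 104, 98, 91, 85]
r = pearson_r(phe, viq)  # strongly negative, matching the direction reported
```

A value near -1 indicates that higher phenylalanine levels track closely with lower scores, which is the pattern the abstract reports (though with weaker magnitudes, as expected for real data).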

  3. The development of spoken language in deaf children: explaining the unexplained variance.

    PubMed

    Musselman, C; Kircaali-Iftar, G

    1996-01-01

    Using a large existing database on children with severe and profound deafness, 10 children were identified whose level of spoken language was most above, and 10 whose level was most below, that expected on the basis of their hearing loss, age, and intelligence. A study of their personal characteristics, family background, and educational history identified factors associated with unusually high performance; these included earlier use of binaural ear-level aids, more highly educated mothers, auditory/verbal or auditory/oral instruction, reliance on spoken language as a method of communication, individualized instruction, integration, and structured teaching by parents. Parents of high performers also reported being highly committed to, and focusing family resources on, developing their child's spoken language.

  4. Bimodal Bilinguals Co-Activate Both Languages during Spoken Comprehension

    ERIC Educational Resources Information Center

    Shook, Anthony; Marian, Viorica

    2012-01-01

    Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are…

  5. Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension.

    PubMed

    Sekerina, Irina A; Campanelli, Luca; Van Dyke, Julie A

    2016-01-01

    The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility for studying retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue in inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors that were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken-language bilinguals.
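Visual-world data of the kind described above are typically summarized as the proportion of fixations on each display object within successive time bins after the retrieval cue. A minimal sketch — using invented fixation records, not the study's data:

```python
from collections import defaultdict

# Hypothetical records: (trial, time in ms after cue onset, object fixated).
fixations = [
    (1, 50, "distractor"), (1, 250, "competitor"), (1, 450, "target"),
    (2, 50, "competitor"), (2, 250, "competitor"), (2, 450, "target"),
    (3, 50, "distractor"), (3, 250, "distractor"), (3, 450, "target"),
]

def fixation_proportions(fixations, bin_ms=200):
    """Proportion of fixations on each object within each time bin."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for _, t, obj in fixations:
        b = t // bin_ms  # assign each fixation to a time bin
        counts[b][obj] += 1
        totals[b] += 1
    return {b: {o: c / totals[b] for o, c in objs.items()}
            for b, objs in sorted(counts.items())}

props = fixation_proportions(fixations)
```

Interference effects then show up as the competitor's proportion curve diverging from the distractor's in the bins following the verbal retrieval cue.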

  7. A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome

    ERIC Educational Resources Information Center

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-01-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…

  8. Bidialectal African American Adolescents' Beliefs about Spoken Language Expectations in English Classrooms

    ERIC Educational Resources Information Center

    Godley, Amanda; Escher, Allison

    2012-01-01

    This article describes the perspectives of bidialectal African American adolescents--adolescents who speak both African American Vernacular English (AAVE) and Standard English--on spoken language expectations in their English classes. Previous research has demonstrated that many teachers hold negative views of AAVE, but existing scholarship has…

  9. Research on Spoken Language Processing. Progress Report No. 21 (1996-1997).

    ERIC Educational Resources Information Center

    Pisoni, David B.

    This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…

  10. Cross-Language Perception of Cantonese Vowels Spoken by Native and Non-Native Speakers

    ERIC Educational Resources Information Center

    So, Connie K.; Attina, Virginie

    2014-01-01

    This study examined the effect of native language background on listeners' perception of native and non-native vowels spoken by native (Hong Kong Cantonese) and non-native (Mandarin and Australian English) speakers. They completed discrimination and an identification task with and without visual cues in clear and noisy conditions. Results…

  11. Effects of Tasks on Spoken Interaction and Motivation in English Language Learners

    ERIC Educational Resources Information Center

    Carrero Pérez, Nubia Patricia

    2016-01-01

    Task based learning (TBL) or Task based learning and teaching (TBLT) is a communicative approach widely applied in settings where English has been taught as a foreign language (EFL). It has been documented as greatly useful to improve learners' communication skills. This research intended to find the effect of tasks on students' spoken interaction…

  12. Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language

    ERIC Educational Resources Information Center

    Nicholas, Johanna G.; Geers, Ann E.

    2008-01-01

    Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…

  13. The Contribution of the Inferior Parietal Cortex to Spoken Language Production

    ERIC Educational Resources Information Center

    Geranmayeh, Fatemeh; Brownsett, Sonia L. E.; Leech, Robert; Beckmann, Christian F.; Woodhead, Zoe; Wise, Richard J. S.

    2012-01-01

    This functional MRI study investigated the involvement of the left inferior parietal cortex (IPC) in spoken language production (Speech). Its role has been apparent in some studies but not others, and is not convincingly supported by clinical studies as they rarely include cases with lesions confined to the parietal lobe. We compared Speech with…

  14. Using Unscripted Spoken Texts in the Teaching of Second Language Listening

    ERIC Educational Resources Information Center

    Wagner, Elvis

    2014-01-01

    Most spoken texts that are used in second language (L2) listening classroom activities are scripted texts, where the text is written, revised, polished, and then read aloud with artificially clear enunciation and slow rate of speech. This article explores the field's overreliance on these scripted texts, at the expense of including unscripted…

  16. Beyond Rhyme or Reason: ERPs Reveal Task-Specific Activation of Orthography on Spoken Language

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Perre, Laetitia; Ziegler, Johannes C.

    2011-01-01

    Metaphonological tasks, such as rhyme judgment, have been the primary tool for the investigation of the effects of orthographic knowledge on spoken language. However, it has been recently argued that the orthography effect in rhyme judgment does not reflect the automatic activation of orthographic codes but rather stems from sophisticated response…

  17. Acquisition of graphic communication by a young girl without comprehension of spoken language.

    PubMed

    von Tetzchner, S; Øvreeide, K D; Jørgensen, K K; Ormhaug, B M; Oxholm, B; Warme, R

    To describe a graphic-mode communication intervention involving a girl with intellectual impairment and autism who did not develop comprehension of spoken language. The aim was to teach graphic-mode vocabulary that reflected her interests, preferences, and the activities and routines of her daily life, by providing sufficient cues to the meanings of the graphic representations so that she would not need to comprehend spoken instructions. An individual case study design was selected, including the use of written records, participant observation, and registration of the girl's graphic vocabulary and use of graphic signs and other communicative expressions. While the girl's comprehension (and hence use) of spoken language remained lacking over a 3-year period, she acquired an active use of over 80 photographs and pictograms. The girl was able to cope better with the cognitive and attentional requirements of graphic communication than with those of spoken language and manual signs, which had been the focus of earlier interventions. Her achievements demonstrate that it is possible for communication-impaired children to learn to use an augmentative and alternative communication system without speech comprehension, provided the intervention utilizes functional strategies and non-language cues to the meaning of the graphic representations that are taught.

  19. The Representation of Spoken Language in Early Reading Books: Problems for L2 Learner Readers.

    ERIC Educational Resources Information Center

    Wallace, Catherine

    Some of the difficulties faced by second language learners who are continuing to acquire English at the same time as they start to read simple extended English texts are illustrated. Specific focus is on the question of how writers of early reading material can best help such learners to understand the relationship between spoken and written…

  20. Developing and Testing EVALOE: A Tool for Assessing Spoken Language Teaching and Learning in the Classroom

    ERIC Educational Resources Information Center

    Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José

    2015-01-01

    Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…

  1. On-Line Syntax: Thoughts on the Temporality of Spoken Language

    ERIC Educational Resources Information Center

    Auer, Peter

    2009-01-01

    One fundamental difference between spoken and written language has to do with the "linearity" of speaking in time, in that the temporal structure of speaking is inherently the outcome of an interactive process between speaker and listener. But despite the status of "linearity" as one of Saussure's fundamental principles, in practice little more…

  2. Parental Reports of Spoken Language Skills in Children with Down Syndrome.

    ERIC Educational Resources Information Center

    Berglund, Eva; Eriksson, Marten; Johansson, Irene

    2001-01-01

    Spoken language in 330 children with Down syndrome (ages 1-5) and 336 normally developing children (ages 1-2) was compared. Growth trends, individual variation, sex differences, and performance on vocabulary, pragmatic, and grammar scales, as well as maximum length of utterance, were explored. Three- and four-year-old Down syndrome children…

  6. Comparing Spoken Language Treatments for Minimally Verbal Preschoolers with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna

    2013-01-01

    Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in…

  7. Professional Training in Listening and Spoken Language--A Canadian Perspective

    ERIC Educational Resources Information Center

    Fitzpatrick, Elizabeth

    2010-01-01

    Several factors undoubtedly influenced the development of listening and spoken language options for children with hearing loss in Canada. The concept of providing auditory-based rehabilitation was popularized in Canada in the 1960s through the work of Drs. Daniel Ling and Agnes Ling in Montreal. The Lings founded the McGill University Project for…

  10. Contribution of the basal ganglia to spoken language: is speech production like the other motor skills?

    PubMed

    Zenon, Alexandre; Olivier, Etienne

    2014-12-01

    Two of the roles assigned to the basal ganglia in spoken language parallel very well their contribution to motor behaviour: (1) their role in sequence processing, resulting in syntax deficits, and (2) their role in movement "vigor," leading to "hypokinetic dysarthria" or "hypophonia." This is an additional example of how the motor system has served the emergence of high-level cognitive functions, such as language.

  11. Teaching Chinese as Tomorrow's Language

    ERIC Educational Resources Information Center

    Chmelynski, Carol

    2006-01-01

    Relatively few public school students are currently learning the Chinese language, but experts predict the number of K-12 schools offering such instruction will soon soar. With China poised to become the next global economic superpower, policymakers say it is essential that American schools expand their Chinese studies. Here, the author discusses…

  12. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment

    PubMed Central

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

    Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both the TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070

  13. Spoken Soul: The Language of Black Imagination and Reality

    ERIC Educational Resources Information Center

    Sealey-Ruiz, Yolanda

    2005-01-01

    Despite American schools administrators' refusal to accept the language of African-American students and their overzealousness to frame language and literacy skills in terms of an "achievement gap," African-American Vernacular English (AAVE) is the language of African-American imagination and reality. This article discusses the characteristics of…

  14. The effects of sign language on spoken language acquisition in children with hearing loss: a systematic review protocol.

    PubMed

    Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David

    2013-12-06

    Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity

  16. Retinoic Acid Signaling: A New Piece in the Spoken Language Puzzle

    PubMed Central

    van Rhijn, Jon-Ruben; Vernes, Sonja C.

    2015-01-01

    Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry including cortico-striato-thalamic loops that control speech–motor output. Understanding the neuro-genetic mechanisms involved in the correct development and function of these pathways will shed light on how humans can effortlessly and innately use spoken language and help to elucidate what goes wrong in speech-language disorders. FOXP2 was the first single gene identified to cause speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that includes receptive and expressive language impairments. The neuro-molecular mechanisms controlled by FOXP2 will give insight into our capacity for speech–motor control, but are only beginning to be unraveled. Recently FOXP2 was found to regulate genes involved in retinoic acid (RA) signaling and to modify the cellular response to RA, a key regulator of brain development. Here we explore evidence that FOXP2 and RA function in overlapping pathways. We summarize evidence at the molecular, cellular, and behavioral levels that suggests an interplay between FOXP2 and RA that may be important for fine motor control and speech–motor output. We propose RA signaling is an exciting new angle from which to investigate how neuro-genetic mechanisms can contribute to the (spoken) language-ready brain. PMID:26635706

  17. Parent Telegraphic Speech Use and Spoken Language in Preschoolers With ASD

    PubMed Central

    Bolt, Daniel M.; Meyer, Allison; Sindberg, Heidi; Ellis Weismer, Susan; Tager-Flusberg, Helen

    2015-01-01

    Purpose: There is considerable controversy regarding whether to use telegraphic or grammatical input when speaking to young children with language delays, including children with autism spectrum disorder (ASD). This study examined telegraphic speech use in parents of preschoolers with ASD and associations with children's spoken language 1 year later. Method: Parent–child dyads (n = 55) participated when children were, on average, 3 (Time 1) and 4 years old (Time 2). The rate at which parents omitted obligatory determiners was derived from transcripts of parent–child play sessions; measures of children's spoken language were obtained from these same transcripts. Results: Telegraphic speech use varied substantially across parents. Higher rates of parent determiner omissions at Time 1 were significantly associated with lower lexical diversity in children's spoken language at Time 2, even when controlling for children's baseline lexical diversity and nonverbal IQ. Findings from path analyses supported the directionality of effects assumed in our regression analyses, although these results should be interpreted with caution due to the limited sample size. Conclusions: Telegraphic input may have a negative impact on language development in young children with ASD. Future experimental research is needed to directly investigate how telegraphic input affects children's language learning and processing. PMID:26381592
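"Controlling for" a baseline covariate, as in the regression above, can be pictured as relating residuals: regress each variable on the covariate, then examine the association in what is left over. A toy Python sketch — invented numbers, a single-covariate residualization for intuition only, not the authors' full regression or path analysis:

```python
def ols_slope_intercept(x, y):
    """Least-squares slope and intercept for simple regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((a - mx) * (c - my) for a, c in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
    return b, my - b * mx

def residuals(x, y):
    """What remains of y after removing its linear dependence on x."""
    b, a = ols_slope_intercept(x, y)
    return [c - (a + b * v) for v, c in zip(x, y)]

# Hypothetical values: baseline lexical diversity (Time 1), parent
# determiner-omission rate, and child lexical diversity at Time 2.
baseline = [10, 12, 14, 16, 18]
omission = [0.30, 0.10, 0.25, 0.05, 0.20]
outcome = [12, 16, 15, 20, 17]

r_omit = residuals(baseline, omission)
r_out = residuals(baseline, outcome)
# Correlating r_omit with r_out estimates the omission-outcome association
# net of baseline; a negative value matches the direction reported above.
```

In practice this is done in one multiple-regression step with all covariates entered together, but the residualization view makes the "even when controlling for baseline" claim concrete.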

  18. The time course of morphological processing during spoken word recognition in Chinese.

    PubMed

    Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan

    2017-03-30

    We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, earlier than the whole-word competitor did. These results suggest that lexical access to the auditory word is incremental and that morphological processing (i.e., semantic access to the first constituent) occurs at an early processing stage, before access to the representation of the whole word in Chinese.
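    The divergence analysis described in this abstract can be sketched as follows. This is a minimal, hypothetical illustration: the `divergence_onset` helper, the bin width, and the fixation proportions are all invented here, and the study's actual bins, smoothing, and statistical tests are not reproduced.

```python
# Hypothetical time-binned fixation proportions for a competitor and a
# distractor (one value per bin). Values are invented for illustration.
BIN_MS = 50  # assumed width of each time bin, in milliseconds

competitor = [0.33, 0.33, 0.34, 0.40, 0.47, 0.52, 0.55, 0.56]
distractor = [0.33, 0.34, 0.33, 0.33, 0.32, 0.30, 0.28, 0.27]

def divergence_onset(comp, dist, bin_ms, run=3):
    """Return the onset time (ms) of the first run of `run` consecutive
    bins in which the competitor attracts more fixations than the
    distractor, or None if no such run exists."""
    streak = 0
    for i, (c, d) in enumerate(zip(comp, dist)):
        streak = streak + 1 if c > d else 0
        if streak == run:
            # Onset is the first bin of the streak.
            return (i - run + 1) * bin_ms
    return None
```

Requiring a short run of consecutive bins (rather than a single bin) is one common way to guard against declaring divergence on noise; the run length here is an arbitrary choice.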

  19. Overlapping networks engaged during spoken language production and its cognitive control.

    PubMed

    Geranmayeh, Fatemeh; Wise, Richard J S; Mehta, Amrish; Leech, Robert

    2014-06-25

    Spoken language production is a complex brain function that relies on large-scale networks. These include domain-specific networks that mediate language-specific processes, as well as domain-general networks mediating top-down and bottom-up attentional control. Language control is thought to involve a left-lateralized fronto-temporal-parietal (FTP) system. However, these regions do not always activate for language tasks and similar regions have been implicated in nonlinguistic cognitive processes. These inconsistent findings suggest that either the left FTP is involved in multidomain cognitive control or that there are multiple spatially overlapping FTP systems. We present evidence from an fMRI study using multivariate analysis to identify spatiotemporal networks involved in spoken language production in humans. We compared spoken language production (Speech) with multiple baselines, counting (Count), nonverbal decision (Decision), and "rest," to pull apart the multiple partially overlapping networks that are involved in speech production. A left-lateralized FTP network was activated during Speech and deactivated during Count and nonverbal Decision trials, implicating it in cognitive control specific to sentential spoken language production. A mirror right-lateralized FTP network was activated in the Count and Decision trials, but not Speech. Importantly, a second overlapping left FTP network showed relative deactivation in Speech. These three networks, with distinct time courses, overlapped in the left parietal lobe. Contrary to the standard model of the left FTP as being dominant for speech, we revealed a more complex pattern within the left FTP, including at least two left FTP networks with competing functional roles, only one of which was activated in speech production.

  20. Searching information with a natural language dialogue system: a comparison of spoken vs. written modalities.

    PubMed

    Le Bigot, Ludovic; Jamet, Eric; Rouet, Jean-François

    2004-11-01

    This paper examines the effects of spoken vs. written dialogue modalities on the effectiveness of information search with a computerized retrieval system. Forty-eight adults familiar with the use of computers were asked to carry out six information retrieval tasks, engaging with the system using either spoken or written communication. The written modality was more efficient with regard to the number of dialogue turns, length of interaction with the system and mental workload. Even though the turns lasted longer in the written mode, they appeared to yield less mental workload. Moreover, spoken and written dialogues did not differ as regards the use of pronouns and articles. The implications for the development of natural-language dialogue systems are discussed.

  1. Spoken Language Processing in the Clarissa Procedure Browser

    NASA Technical Reports Server (NTRS)

    Rayner, M.; Hockey, B. A.; Renders, J.-M.; Chatzichrisafis, N.; Farrell, K.

    2005-01-01

    Clarissa, an experimental voice-enabled procedure browser that has recently been deployed on the International Space Station, is, as far as we know, the first spoken dialog system in space. We describe the objectives of the Clarissa project and the system's architecture. In particular, we focus on three key problems: grammar-based speech recognition using the Regulus toolkit; methods for open-mic speech recognition; and robust side-effect-free dialogue management for handling undos, corrections and confirmations. We first describe the grammar-based recogniser we have built using Regulus, and report experiments where we compare it against a class N-gram recogniser trained on the same 3297-utterance dataset. We obtained a 15% relative improvement in WER and a 37% improvement in semantic error rate. The grammar-based recogniser moreover outperforms the class N-gram version for utterances of all lengths from 1 to 9 words inclusive. The central problem in building an open-mic speech recognition system is being able to distinguish between commands directed at the system and other material (cross-talk), which should be rejected. Most spoken dialogue systems make the accept/reject decision by applying a threshold to the recognition confidence score. We show how a simple and general method, based on standard approaches to document classification using Support Vector Machines, can give substantially better performance, and report experiments showing a relative reduction in the task-level error rate of about 25% compared to the baseline confidence-threshold method. Finally, we describe a general side-effect-free dialogue management architecture that we have implemented in Clarissa, which extends the "update semantics" framework by including task as well as dialogue information in the information state. We show that this enables elegant treatments of several dialogue management problems, including corrections, confirmations, querying of the environment, and regression…
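    The SVM-based accept/reject idea described above can be illustrated with a toy linear SVM over bag-of-words features. Everything below is a hypothetical sketch, not the Clarissa implementation: the utterances are invented, and `featurize`, `train_svm` (a Pegasos-style sub-gradient loop on the hinge loss), and `accept` are stand-in helpers.

```python
import random

# Invented training data: (utterance, label) where 1 = command directed
# at the system and 0 = cross-talk that should be rejected.
DATA = [
    ("next step", 1), ("go to step three", 1), ("set voice volume", 1),
    ("read the caution", 1), ("previous step", 1), ("repeat that", 1),
    ("how was your day", 0), ("hand me that wrench", 0),
    ("did you see the news", 0), ("what time is lunch", 0),
]

def featurize(text):
    """Bag-of-words features as a word -> count dict."""
    feats = {}
    for w in text.lower().split():
        feats[w] = feats.get(w, 0) + 1
    return feats

def train_svm(data, epochs=200, lam=0.01):
    """Linear SVM trained by Pegasos-style sub-gradient descent."""
    w, t = {}, 0
    for _ in range(epochs):
        random.shuffle(data)
        for text, label in data:
            t += 1
            y = 1 if label == 1 else -1
            x = featurize(text)
            eta = 1.0 / (lam * t)
            margin = y * sum(w.get(f, 0.0) * v for f, v in x.items())
            # Shrink weights (L2 regularization), then step on hinge loss.
            for f in w:
                w[f] *= (1 - eta * lam)
            if margin < 1:
                for f, v in x.items():
                    w[f] = w.get(f, 0.0) + eta * y * v
    return w

def accept(w, text):
    """Accept the utterance as a system command iff the score is positive."""
    x = featurize(text)
    return sum(w.get(f, 0.0) * v for f, v in x.items()) > 0

random.seed(0)  # make the toy training run reproducible
weights = train_svm(list(DATA))
```

Note how this differs from a confidence threshold: an utterance whose words never occur in command training data scores zero and is rejected regardless of how confidently the recogniser transcribed it.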

  3. The genetic bases of speech sound disorders: evidence from spoken and written language.

    PubMed

    Lewis, Barbara A; Shriberg, Lawrence D; Freebairn, Lisa A; Hansen, Amy J; Stein, Catherine M; Taylor, H Gerry; Iyengar, Sudha K

    2006-12-01

    The purpose of this article is to review recent findings suggesting a genetic susceptibility for speech sound disorders (SSD), the most prevalent communication disorder in early childhood. The importance of genetic studies of SSD and the hypothetical underpinnings of these genetic findings are reviewed, as well as genetic associations of SSD with other language and reading disabilities. The authors propose that many genes contribute to SSD. They further hypothesize that some genes contribute to SSD disorders alone, whereas other genes influence both SSD and other written and spoken language disorders. The authors postulate that underlying common cognitive traits, or endophenotypes, are responsible for shared genetic influences of spoken and written language. They review findings from their genetic linkage study and from the literature to illustrate recent developments in this area. Finally, they discuss challenges for identifying genetic influence on SSD and propose a conceptual framework for study of the genetic basis of SSD.

  4. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    PubMed

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  5. Differences between young and older adults' spoken language production in descriptions of negative versus neutral pictures.

    PubMed

    Castro, Nichol; James, Lori E

    2014-01-01

    Young and older participants produced oral picture descriptions that were analyzed to determine the impact of negative emotional content on spoken language production. An interaction was found for speech disfluencies: young adults' disfluencies did not vary, whereas older adults' disfluencies increased, for negative compared to neutral pictures. Young adults adopted a faster speech rate while describing negative compared to neutral pictures, but older adults did not. Reference errors were uncommon for both age groups, but occurred more during descriptions of negative than neutral pictures. Our findings indicate that negative content can be differentially disruptive to older adults' spoken language production, and add to the literature on aging, emotion, and cognition by exploring effects within the domain of language production.

  6. The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing

    PubMed Central

    Gow, David W.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing. PMID:22498237

  7. Defining Spoken Language Benchmarks and Selecting Measures of Expressive Language Development for Young Children With Autism Spectrum Disorders

    PubMed Central

    Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul

    2010-01-01

    Purpose The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition as part of treatment research studies or for use in applied settings and (b) to propose and define a common terminology for describing levels of spoken language ability in the expressive modality and to set benchmarks for determining a child’s language level in order to establish a framework for comparing outcomes across intervention studies. Method The National Institute on Deafness and Other Communication Disorders assembled a group of researchers with interests and experience in the study of language development and disorders in young children with autism spectrum disorders. The group worked for 18 months through a series of conference calls and correspondence, culminating in a meeting held in December 2007 to achieve consensus on these aims. Results The authors recommend moving away from using the term functional speech, replacing it with a developmental framework. Rather, they recommend multiple sources of information to define language phases, including natural language samples, parent report, and standardized measures. They also provide guidelines and objective criteria for defining children’s spoken language expression in three major phases that correspond to developmental levels between 12 and 48 months of age. PMID:19380608

  8. Defining spoken language benchmarks and selecting measures of expressive language development for young children with autism spectrum disorders.

    PubMed

    Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul

    2009-06-01

    The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition as part of treatment research studies or for use in applied settings and (b) to propose and define a common terminology for describing levels of spoken language ability in the expressive modality and to set benchmarks for determining a child's language level in order to establish a framework for comparing outcomes across intervention studies. The National Institute on Deafness and Other Communication Disorders assembled a group of researchers with interests and experience in the study of language development and disorders in young children with autism spectrum disorders. The group worked for 18 months through a series of conference calls and correspondence, culminating in a meeting held in December 2007 to achieve consensus on these aims. The authors recommend moving away from using the term functional speech, replacing it with a developmental framework. Rather, they recommend multiple sources of information to define language phases, including natural language samples, parent report, and standardized measures. They also provide guidelines and objective criteria for defining children's spoken language expression in three major phases that correspond to developmental levels between 12 and 48 months of age.

  9. Predictors of spoken language development following pediatric cochlear implantation.

    PubMed

    Boons, Tinne; Brokx, Jan P L; Dhooge, Ingeborg; Frijns, Johan H M; Peeraer, Louis; Vermeulen, Anneke; Wouters, Jan; van Wieringen, Astrid

    2012-01-01

    Although deaf children with cochlear implants (CIs) are able to develop good language skills, the large variability in outcomes remains a significant concern. The first aim of this study was to evaluate language skills in children with CIs to establish benchmarks. The second aim was to make an estimation of the optimal age at implantation to provide maximal opportunities for the child to achieve good language skills afterward. The third aim was to gain more insight into the causes of variability to set recommendations for optimizing the rehabilitation process of prelingually deaf children with CIs. Receptive and expressive language development of 288 children who received CIs by age five was analyzed in a retrospective multicenter study. Outcome measures were language quotients (LQs) on the Reynell Developmental Language Scales and Schlichting Expressive Language Test at 1, 2, and 3 years after implantation. Independent predictive variables were nine child-related, environmental, and auditory factors. A series of multiple regression analyses determined the amount of variance in expressive and receptive language outcomes attributable to each predictor when controlling for the other variables. Simple linear regressions with age at first fitting and independent samples t tests demonstrated that children implanted before the age of two performed significantly better on all tests than children who were implanted at an older age. The mean LQ was 0.78 with an SD of 0.18. A child with an LQ lower than 0.60 (= 0.78-0.18) within 3 years after implantation was labeled as a weak performer compared with other deaf children implanted before the age of two. Contralateral stimulation with a second CI or a hearing aid and the absence of additional disabilities were related to better language outcomes. The effect of environmental factors, comprising multilingualism, parental involvement, and communication mode increased over time. Three years after implantation, the total multiple…
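    The simple-linear-regression step reported above (age at first fitting predicting a language quotient) can be sketched in closed form. The data points and the `ols` helper below are invented for illustration; only the direction of the effect (earlier implantation, higher LQ) mirrors the study's finding.

```python
# Hypothetical data: age at first fitting (years) and language quotient.
ages = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
lqs  = [0.95, 0.90, 0.84, 0.80, 0.74, 0.70, 0.63, 0.58]

def ols(x, y):
    """Closed-form ordinary least squares for one predictor: y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b

a, b = ols(ages, lqs)
# A negative slope b means later implantation predicts a lower LQ.
```

In the study itself this is one piece of a series of multiple regressions with nine predictors; the single-predictor version shown here only illustrates the mechanics.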

  10. Are Young Children with Cochlear Implants Sensitive to the Statistics of Words in the Ambient Spoken Language?

    ERIC Educational Resources Information Center

    Guo, Ling-Yu; McGregor, Karla K.; Spencer, Linda J.

    2015-01-01

    Purpose: The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation.…

  11. Splenium Development and Early Spoken Language in Human Infants

    ERIC Educational Resources Information Center

    Swanson, Meghan R.; Wolff, Jason J.; Elison, Jed T.; Gu, Hongbin; Hazlett, Heather C.; Botteron, Kelly; Styner, Martin; Paterson, Sarah; Gerig, Guido; Constantino, John; Dager, Stephen; Estes, Annette; Vachet, Clement; Piven, Joseph

    2017-01-01

    The association between developmental trajectories of language-related white matter fiber pathways from 6 to 24 months of age and individual differences in language production at 24 months of age was investigated. The splenium of the corpus callosum, a fiber pathway projecting through the posterior hub of the default mode network to occipital…

  12. Languages Spoken by English Learners (ELs). Fast Facts

    ERIC Educational Resources Information Center

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on English learners include: (1) Top 20 EL languages, as reported in states' top five lists: SY 2011-12; (2) States, including DC, with 80 percent or…

  13. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language.

    PubMed

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2 (15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  14. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    PubMed Central

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H215O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  15. Is There a Correlation between Languages Spoken and Intricate Movements of Tongue? A Comparative Study of Various Movements of Tongue among the Three Ethnic Races of Malaysia

    PubMed Central

    Nayak, Satheesha B; Awal, Mahfuzah Binti; Han, Chang Wei; Sivaram, Ganeshram; Vigneswaran, Thimesha; Choon, Tee Lian

    2016-01-01

    Introduction The tongue is mainly used for taste, chewing and speech. In the present study, we focused on the secondary function of the tongue: how it is used in phonetic pronunciation and linguistics, and how these factors affect tongue movements. Objective To compare all possible movements of the tongue among Malaysians belonging to three ethnic races and to find out whether there is any link between the languages spoken and the ability to perform various tongue movements. Materials and Methods A total of 450 undergraduate medical students participated in the study. The students were chosen from three different races, i.e. Malays, Chinese and Indians (Malaysian Indians). Data were collected from the students through a semi-structured interview, following which each student was asked to demonstrate various tongue movements such as protrusion, retraction, flattening, rolling, twisting, folding or any other special movements. The data obtained were first segregated and analysed according to gender, race and the types and dialects of languages spoken. Results We found that most of the Malaysians were able to perform the basic movements of the tongue, such as protrusion and flattening, and very few were able to perform twisting and folding of the tongue. The ability to perform normal tongue movements and special movements such as folding, twisting and rolling was higher among Indians than among Malays and Chinese. Conclusion Languages spoken by Indians involve detailed tongue rolling and folding in pronouncing certain words, which may explain why Indians are more versatile with tongue movements than the other two races amongst Malaysians. The languages a person speaks may thus serve as a variable that increases the ability to perform special tongue movements, in addition to the influence of the person's genetic makeup. PMID:26894051

  16. Using Spoken Language Benchmarks to Characterize the Expressive Language Skills of Young Children With Autism Spectrum Disorders.

    PubMed

    Ellawadi, Allison Bean; Ellis Weismer, Susan

    2015-11-01

    Spoken language benchmarks proposed by Tager-Flusberg et al. (2009) were used to characterize communication profiles of toddlers with autism spectrum disorders and to investigate if there were differences in variables hypothesized to influence language development at different benchmark levels. The communication abilities of a large sample of toddlers with autism spectrum disorders (N = 105) were characterized in terms of spoken language benchmarks. The toddlers were grouped according to these benchmarks to investigate whether there were differences in selected variables across benchmark groups at a mean age of 2.5 years. The majority of children in the sample presented with uneven communication profiles with relative strengths in phonology and significant weaknesses in pragmatics. When children were grouped according to one expressive language domain, across-group differences were observed in response to joint attention and gestures but not cognition or restricted and repetitive behaviors. The spoken language benchmarks are useful for characterizing early communication profiles and investigating features that influence expressive language growth.

  17. Using Spoken Language Benchmarks to Characterize the Expressive Language Skills of Young Children With Autism Spectrum Disorders

    PubMed Central

    Weismer, Susan Ellis

    2015-01-01

    Purpose Spoken language benchmarks proposed by Tager-Flusberg et al. (2009) were used to characterize communication profiles of toddlers with autism spectrum disorders and to investigate if there were differences in variables hypothesized to influence language development at different benchmark levels. Method The communication abilities of a large sample of toddlers with autism spectrum disorders (N = 105) were characterized in terms of spoken language benchmarks. The toddlers were grouped according to these benchmarks to investigate whether there were differences in selected variables across benchmark groups at a mean age of 2.5 years. Results The majority of children in the sample presented with uneven communication profiles with relative strengths in phonology and significant weaknesses in pragmatics. When children were grouped according to one expressive language domain, across-group differences were observed in response to joint attention and gestures but not cognition or restricted and repetitive behaviors. Conclusion The spoken language benchmarks are useful for characterizing early communication profiles and investigating features that influence expressive language growth. PMID:26254475

  18. Spoken word recognition by Latino children learning Spanish as their first language*

    PubMed Central

    HURTADO, NEREYDA; MARCHMAN, VIRGINIA A.; FERNALD, ANNE

    2010-01-01

    Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development. PMID:17542157

  19. Spoken word recognition by Latino children learning Spanish as their first language.

    PubMed

    Hurtado, Nereyda; Marchman, Virginia A; Fernald, Anne

    2007-05-01

    Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development.

  20. La mort d'une langue: le judeo-espagnol (The Death of a Language: The Spanish Spoken by Jews)

    ERIC Educational Resources Information Center

    Renard, Raymond

    1971-01-01

    Describes the Sephardic culture which flourished in the Balkans, Ottoman Empire, and North Africa during the Middle Ages. Suggests the use of "Ladino," the language of medieval Spain spoken by the expelled Jews. (DS)

  1. La mort d'une langue: le judeo-espagnol (The Death of a Language: The Spanish Spoken by Jews)

    ERIC Educational Resources Information Center

    Renard, Raymond

    1971-01-01

    Describes the Sephardic culture which flourished in the Balkans, Ottoman Empire, and North Africa during the Middle Ages. Suggests the use of "Ladino," the language of medieval Spain spoken by the expelled Jews. (DS)

  2. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    PubMed

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries, natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we use fMRI to compare the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged the left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language.

  3. The interface between spoken and written language: developmental disorders.

    PubMed

    Hulme, Charles; Snowling, Margaret J

    2014-01-01

    We review current knowledge about reading development and the origins of difficulties in learning to read. We distinguish between the processes involved in learning to decode print, and the processes involved in reading for meaning (reading comprehension). At a cognitive level, difficulties in learning to read appear to be predominantly caused by deficits in underlying oral language skills. The development of decoding skills appears to depend critically upon phonological language skills, and variations in phoneme awareness, letter-sound knowledge and rapid automatized naming each appear to be causally related to problems in learning to read. Reading comprehension difficulties in contrast appear to be critically dependent on a range of oral language comprehension skills (including vocabulary knowledge and grammatical, morphological and pragmatic skills).

  4. The interface between spoken and written language: developmental disorders

    PubMed Central

    Hulme, Charles; Snowling, Margaret J.

    2014-01-01

    We review current knowledge about reading development and the origins of difficulties in learning to read. We distinguish between the processes involved in learning to decode print, and the processes involved in reading for meaning (reading comprehension). At a cognitive level, difficulties in learning to read appear to be predominantly caused by deficits in underlying oral language skills. The development of decoding skills appears to depend critically upon phonological language skills, and variations in phoneme awareness, letter–sound knowledge and rapid automatized naming each appear to be causally related to problems in learning to read. Reading comprehension difficulties in contrast appear to be critically dependent on a range of oral language comprehension skills (including vocabulary knowledge and grammatical, morphological and pragmatic skills). PMID:24324239

  5. Understanding the Relationship between Latino Students' Preferred Learning Styles and Their Language Spoken at Home

    ERIC Educational Resources Information Center

    Maldonado Torres, Sonia Enid

    2016-01-01

    The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…

  6. Understanding the Relationship between Latino Students' Preferred Learning Styles and Their Language Spoken at Home

    ERIC Educational Resources Information Center

    Maldonado Torres, Sonia Enid

    2016-01-01

    The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…

  7. The Assessment of Spoken Language under Varying Interactional Conditions.

    ERIC Educational Resources Information Center

    Berry, Vivien

    Two studies of individuals' oral second language performance in interaction with extraverts and introverts are reported here. The first, described briefly, investigated the effects of homogeneous (extravert/extravert or introvert/introvert) vs. heterogeneous pairings on oral performance in interviews. Subjects were 36 women students in a Japanese…

  8. Enriching English Language Spoken Outputs of Kindergartners in Thailand

    ERIC Educational Resources Information Center

    Wilang, Jeffrey Dawala; Sinwongsuwat, Kemtong

    2012-01-01

    This year is designated as Thailand's "English Speaking Year" with the aim of improving the communicative competence of Thais for the upcoming integration of the Association of Southeast Asian Nations (ASEAN) in 2015. The consistent low-level proficiency of the Thais in the English language has led to numerous curriculum revisions and…

  9. Apples and Oranges: Developmental Discontinuities in Spoken-Language Processing?

    PubMed Central

    Creel, Sarah C.; Quam, Carolyn

    2015-01-01

    Much research focuses on speech processing in infancy, sometimes generating the impression that speech-sound categories do not develop further. Yet other studies suggest substantial plasticity throughout mid-childhood. Differences between infant versus child and adult experimental methods currently obscure how language processing changes across childhood, calling for approaches that span development. PMID:26456261

  10. Give and Take: Syntactic Priming during Spoken Language Comprehension

    ERIC Educational Resources Information Center

    Thothathiri, Malathi; Snedeker, Jesse

    2008-01-01

    Syntactic priming during language production is pervasive and well-studied. Hearing, reading, speaking or writing a sentence with a given structure increases the probability of subsequently producing the same structure, regardless of whether the prime and target share lexical content. In contrast, syntactic priming during comprehension has proven…

  11. Give and Take: Syntactic Priming during Spoken Language Comprehension

    ERIC Educational Resources Information Center

    Thothathiri, Malathi; Snedeker, Jesse

    2008-01-01

    Syntactic priming during language production is pervasive and well-studied. Hearing, reading, speaking or writing a sentence with a given structure increases the probability of subsequently producing the same structure, regardless of whether the prime and target share lexical content. In contrast, syntactic priming during comprehension has proven…

  12. Spoken Language Vocabulary and Structural Frequency Count: English Data Analyses.

    ERIC Educational Resources Information Center

    Miron, Murray S.

    The report is a frequency analysis of vocabulary and sentence patterns in the English language. The corpora used are a media sample, a discussion session, elicited sentences, and words elicited for frame sentences. The outputs are the following frequency tables: (1) Semantic frequency of combined corpus (media, discussion, elicited sentences)…

  13. Spoken Language Vocabulary and Structural Frequency Count: Japanese Data Analyses.

    ERIC Educational Resources Information Center

    Sukle, Robert J.; And Others

    The report is a frequency analysis of vocabulary and sentence patterns in the Japanese language. The corpora used are a media sample, a discussion session, elicited sentences, and words elicited for frame sentences. The outputs are the following frequency tables: (1) semantic frequency of combined corpus (media, discussion, elicited sentences)…

  14. Spoken Language Vocabulary and Structural Frequency Count: Swahili Data Analyses.

    ERIC Educational Resources Information Center

    Rubama, Ibrahim; And Others

    The report is a frequency analysis of vocabulary and sentence patterns in the Swahili language. The corpora used are a media sample, a discussion session, elicited sentences, and words elicited for frame sentences. The outputs are the following frequency tables: (1) semantic frequency of combined corpus (media, discussion, elicited sentences)…

  15. Access to Meaning/Spoken and Written Language.

    ERIC Educational Resources Information Center

    Pinnell, Gay Su, Ed.; King, Martha L., Ed.

    1984-01-01

    Designed to explore the ways language functions to help children gain access to meaning as they progress through the educational system, this journal issue views communication as a social, interactive process in which speakers and writers attempt to link into what listeners and readers know, want to know, or need to know. The 12 articles in the…

  16. International Curriculum for Chinese Language Education

    ERIC Educational Resources Information Center

    Scrimgeour, Andrew; Wilson, Philip

    2009-01-01

    The International Curriculum for Chinese Language Education (ICCLE) represents a significant initiative by the Office of Chinese Language Council International (Hanban) to organise and describe objectives and content for a standardised Chinese language curriculum around the world. It aims to provide a reference curriculum for planning, a framework…

  17. Medical practices display power law behaviors similar to spoken languages.

    PubMed

    Paladino, Jonathan D; Crooke, Philip S; Brackney, Christopher R; Kaynar, A Murat; Hotchkiss, John R

    2013-09-04

    Medical care commonly involves the apprehension of complex patterns of patient derangements to which the practitioner responds with patterns of interventions, as opposed to single therapeutic maneuvers. This complexity renders the objective assessment of practice patterns using conventional statistical approaches difficult. Combinatorial approaches drawn from symbolic dynamics are used to encode the observed patterns of patient derangement and associated practitioner response patterns as sequences of symbols. Concatenating each patient derangement symbol with the contemporaneous practitioner response symbol creates "words" encoding the simultaneous patient derangement and provider response patterns and yields an observed vocabulary with quantifiable statistical characteristics. A fundamental observation in many natural languages is the existence of a power law relationship between the rank order of word usage and the absolute frequency with which particular words are uttered. We show that population level patterns of patient derangement: practitioner intervention word usage in two entirely unrelated domains of medical care display power law relationships similar to those of natural languages, and that, in one of these domains, power law behavior at the population level reflects power law behavior at the level of individual practitioners. Our results suggest that patterns of medical care can be approached using quantitative linguistic techniques, a finding that has implications for the assessment of expertise, machine learning identification of optimal practices, and construction of bedside decision support tools.
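The rank-frequency analysis this abstract relies on (a Zipf-style power law check) can be sketched in a few lines. This is a minimal illustration, not the authors' code: the toy "words" below (e.g. "A:x", pairing a hypothetical patient-derangement symbol A with a practitioner-response symbol x) are invented for the example, and the log-log slope is estimated with a simple least-squares fit.

```python
from collections import Counter
import math

def rank_frequency(words):
    """Return (rank, frequency) pairs, most frequent word first."""
    counts = Counter(words)
    freqs = sorted(counts.values(), reverse=True)
    return list(enumerate(freqs, start=1))

def loglog_slope(rank_freq):
    """Least-squares slope of log(frequency) vs. log(rank).

    A slope near -1 is the classic Zipf power-law signature.
    """
    xs = [math.log(r) for r, _ in rank_freq]
    ys = [math.log(f) for _, f in rank_freq]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Invented corpus: each "word" concatenates a derangement symbol
# with the contemporaneous response symbol, as in the abstract.
corpus = ["A:x", "A:x", "A:x", "A:x", "B:y",
          "B:y", "C:z", "A:y", "B:x", "A:x"]
rf = rank_frequency(corpus)
slope = loglog_slope(rf)
```

The toy corpus is far too small to estimate an exponent meaningfully; the point is only the procedure, which scales unchanged to real vocabularies of derangement-response words.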

  18. The Specificity of Sound Symbolic Correspondences in Spoken Language.

    PubMed

    Tzeng, Christina Y; Nygaard, Lynne C; Namy, Laura L

    2016-12-29

    Although language has long been regarded as a primarily arbitrary system, sound symbolism, or non-arbitrary correspondences between the sound of a word and its meaning, also exists in natural language. Previous research suggests that listeners are sensitive to sound symbolism. However, little is known about the specificity of these mappings. This study investigated whether sound symbolic properties correspond to specific meanings, or whether these properties generalize across semantic dimensions. In three experiments, native English-speaking adults heard sound symbolic foreign words for dimensional adjective pairs (big/small, round/pointy, fast/slow, moving/still) and for each foreign word, selected a translation among English antonyms that either matched or mismatched with the correct meaning dimension. Listeners agreed more reliably on the English translation for matched relative to mismatched dimensions, though reliable cross-dimensional mappings did occur. These findings suggest that although sound symbolic properties generalize to meanings that may share overlapping semantic features, sound symbolic mappings offer semantic specificity.

  19. Medical practices display power law behaviors similar to spoken languages

    PubMed Central

    2013-01-01

    Background Medical care commonly involves the apprehension of complex patterns of patient derangements to which the practitioner responds with patterns of interventions, as opposed to single therapeutic maneuvers. This complexity renders the objective assessment of practice patterns using conventional statistical approaches difficult. Methods Combinatorial approaches drawn from symbolic dynamics are used to encode the observed patterns of patient derangement and associated practitioner response patterns as sequences of symbols. Concatenating each patient derangement symbol with the contemporaneous practitioner response symbol creates “words” encoding the simultaneous patient derangement and provider response patterns and yields an observed vocabulary with quantifiable statistical characteristics. Results A fundamental observation in many natural languages is the existence of a power law relationship between the rank order of word usage and the absolute frequency with which particular words are uttered. We show that population level patterns of patient derangement: practitioner intervention word usage in two entirely unrelated domains of medical care display power law relationships similar to those of natural languages, and that, in one of these domains, power law behavior at the population level reflects power law behavior at the level of individual practitioners. Conclusions Our results suggest that patterns of medical care can be approached using quantitative linguistic techniques, a finding that has implications for the assessment of expertise, machine learning identification of optimal practices, and construction of bedside decision support tools. PMID:24007376

  20. The missing foundation in teacher education: Knowledge of the structure of spoken and written language.

    PubMed

    Moats, L C

    1994-01-01

    Reading research supports the necessity for directly teaching concepts about linguistic structure to beginning readers and to students with reading and spelling difficulties. In this study, experienced teachers of reading, language arts, and special education were tested to determine if they have the requisite awareness of language elements (e.g., phonemes, morphemes) and of how these elements are represented in writing (e.g., knowledge of sound-symbol correspondences). The results were surprisingly poor, indicating that even motivated and experienced teachers typically understand too little about spoken and written language structure to be able to provide sufficient instruction in these areas. The utility of language structure knowledge for instructional planning, for assessment of student progress, and for remediation of literacy problems is discussed. The teachers participating in the study subsequently took a course focusing on phonemic awareness training, spoken-written language relationships, and careful analysis of spelling and reading behavior in children. At the end of the course, the teachers judged this information to be essential for teaching and advised that it become a prerequisite for certification. Recommendations for requirements and content of teacher education programs are presented.

  1. Preliminary findings of similarities and differences in the signed and spoken language of children with autism.

    PubMed

    Shield, Aaron

    2014-11-01

    Approximately 30% of hearing children with autism spectrum disorder (ASD) do not acquire expressive language, and those who do often show impairments related to their social deficits, using language instrumentally rather than socially, with a poor understanding of pragmatics and a tendency toward repetitive content. Linguistic abnormalities can be clinically useful as diagnostic markers of ASD and as targets for intervention. Studies have begun to document how ASD manifests in children who are deaf for whom signed languages are the primary means of communication. Though the underlying disorder is presumed to be the same in children who are deaf and children who hear, the structures of signed and spoken languages differ in key ways. This article describes similarities and differences between the signed and spoken language acquisition of children on the spectrum. Similarities include echolalia, pronoun avoidance, neologisms, and the existence of minimally verbal children. Possible areas of divergence include pronoun reversal, palm reversal, and facial grammar.

  2. Reading, Writing, and Spoken Language Assessment Profiles for Students Who Are Deaf and Hard of Hearing Compared with Students with Language Learning Disabilities

    ERIC Educational Resources Information Center

    Nelson, Nickola Wolf; Crumpton, Teresa

    2015-01-01

    Working with students who are deaf or hard of hearing (DHH) can raise questions about whether language and literacy delays and difficulties are related directly to late and limited access to spoken language, to co-occurring language learning disabilities (LLD), or to both. A new Test of Integrated Language and Literacy Skills, which incorporates…

  3. Reading, Writing, and Spoken Language Assessment Profiles for Students Who Are Deaf and Hard of Hearing Compared with Students with Language Learning Disabilities

    ERIC Educational Resources Information Center

    Nelson, Nickola Wolf; Crumpton, Teresa

    2015-01-01

    Working with students who are deaf or hard of hearing (DHH) can raise questions about whether language and literacy delays and difficulties are related directly to late and limited access to spoken language, to co-occurring language learning disabilities (LLD), or to both. A new Test of Integrated Language and Literacy Skills, which incorporates…

  4. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

    ERIC Educational Resources Information Center

    Williams, Joshua T.; Newman, Sharlene D.

    2017-01-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively few studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…

  5. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

    ERIC Educational Resources Information Center

    Williams, Joshua T.; Newman, Sharlene D.

    2017-01-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively few studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…

  6. Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)

    NASA Astrophysics Data System (ADS)

    Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto

    An automatic speech-to-text transformer system, suited to an unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a preceding isolated-word recognition step for slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for the input sequence. Pronunciation differences among regions of Brazil are considered, but only those that cause differences in the phonological transcription, because purely phonetic differences are absorbed during the transformation to the phonological level. In the final stage, all candidate written words are analyzed from an orthographic and grammatical point of view to eliminate incorrect ones.
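The staged pipeline described here (probabilistic phoneme-to-grapheme expansion followed by lexical filtering) can be caricatured in a few lines. This is a hedged sketch, not the system itself: the `P2G` table, its probabilities, and the one-word lexicon are all invented for illustration, whereas the real system draws on much richer phonological and orthographic knowledge sources.

```python
from itertools import product

# Hypothetical phoneme-to-grapheme table with per-spelling probabilities;
# real PLB rules are context-sensitive and region-dependent.
P2G = {
    "s": [("s", 0.6), ("ss", 0.2), ("c", 0.2)],
    "a": [("a", 1.0)],
    "p": [("p", 1.0)],
    "o": [("o", 0.9), ("ou", 0.1)],
}

def graphemic_candidates(phonemes, lexicon=None):
    """Expand a phoneme sequence into spellings ordered by joint
    probability, optionally filtered against a lexicon (standing in
    for the final orthographic/grammatical check)."""
    options = [P2G[p] for p in phonemes]
    cands = []
    for combo in product(*options):
        word = "".join(g for g, _ in combo)
        prob = 1.0
        for _, p in combo:
            prob *= p
        cands.append((word, prob))
    cands.sort(key=lambda wp: -wp[1])
    if lexicon is not None:
        cands = [wp for wp in cands if wp[0] in lexicon]
    return cands

# "sapo" (Portuguese for "toad") used as an invented test word.
cands = graphemic_candidates(["s", "a", "p", "o"], lexicon={"sapo"})
```

Without the lexicon filter the function returns every spelling hypothesis ranked by probability; the filter models the final stage in which only orthographically valid words survive.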

  7. Implicit Spoken Words and Motor Sequences Learning Are Impaired in Children with Specific Language Impairment.

    PubMed

    Desmottes, Lise; Meulemans, Thierry; Maillart, Christelle

    2016-05-01

    This study aims to compare verbal and motor implicit sequence learning abilities in children with and without specific language impairment (SLI). Forty-eight children (24 control and 24 SLI) were administered the Serial Search Task (SST), which enables the simultaneous assessment of implicit spoken words and visuomotor sequences learning. Results showed that control children implicitly learned both the spoken words and the motor sequences. In contrast, children with SLI showed deficits in both types of learning. Moreover, correlational analyses revealed that SST performance was linked with grammatical abilities in control children but with lexical abilities in children with SLI. Overall, this pattern of results supports the procedural deficit hypothesis and suggests that domain general implicit sequence learning is impaired in SLI.

  8. Development of Lexical-Semantic Language System: N400 Priming Effect for Spoken Words in 18- and 24-Month Old Children

    ERIC Educational Resources Information Center

    Rama, Pia; Sirri, Louah; Serres, Josette

    2013-01-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related…

  9. Development of Lexical-Semantic Language System: N400 Priming Effect for Spoken Words in 18- and 24-Month Old Children

    ERIC Educational Resources Information Center

    Rama, Pia; Sirri, Louah; Serres, Josette

    2013-01-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related…

  10. Setting the Tone: An ERP Investigation of the Influences of Phonological Similarity on Spoken Word Recognition in Mandarin Chinese

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2012-01-01

    We investigated the influences of phonological similarity on the time course of spoken word processing in Mandarin Chinese. Event related potentials were recorded while adult native speakers of Mandarin ("N" = 19) judged whether auditory words matched or mismatched visually presented pictures. Mismatching words were of the following…

  11. Teaching Pragmatic Awareness of Spoken Requests to Chinese EAP Learners in the UK: Is Explicit Instruction Effective?

    ERIC Educational Resources Information Center

    Halenko, Nicola; Jones, Christian

    2011-01-01

    The aim of this study is to evaluate the impact of explicit interventional treatment on developing pragmatic awareness and production of spoken requests in an EAP context (taken here to mean those studying/using English for academic purposes in the UK) with Chinese learners of English at a British higher education institution. The study employed…

  12. Setting the Tone: An ERP Investigation of the Influences of Phonological Similarity on Spoken Word Recognition in Mandarin Chinese

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2012-01-01

    We investigated the influences of phonological similarity on the time course of spoken word processing in Mandarin Chinese. Event related potentials were recorded while adult native speakers of Mandarin ("N" = 19) judged whether auditory words matched or mismatched visually presented pictures. Mismatching words were of the following…

  13. Task modulation of disyllabic spoken word recognition in Mandarin Chinese: a unimodal ERP study

    PubMed Central

    Huang, Xianjun; Yang, Jin-Chen; Chang, Ruohan; Guo, Chunyan

    2016-01-01

    Using unimodal auditory tasks of word-matching and meaning-matching, this study investigated how the phonological and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechanism induced by experimental tasks. Both semantic similarity and word-initial phonological similarity between the primes and targets were manipulated. Results showed that at early stage of recognition (~150–250 ms), an enhanced P2 was elicited by the word-initial phonological mismatch in both tasks. In ~300–500 ms, a fronto-central negative component was elicited by word-initial phonological similarities in the word-matching task, while a parietal negativity was elicited by semantically unrelated primes in the meaning-matching task, indicating that both the semantic and phonological processes can be involved in this time window, depending on the task requirements. In the late stage (~500–700 ms), a centro-parietal Late N400 was elicited in both tasks, but with a larger effect in the meaning-matching task than in the word-matching task. This finding suggests that the semantic representation of the spoken words can be activated automatically in the late stage of recognition, even when semantic processing is not required. However, the magnitude of the semantic activation is modulated by task requirements. PMID:27180951

  14. Event-related potential evidence on the influence of accentuation in spoken discourse comprehension in Chinese.

    PubMed

    Li, Xiaoqing; Hagoort, Peter; Yang, Yufang

    2008-05-01

    In an event-related potential experiment with Chinese discourses as material, we investigated how and when accentuation influences spoken discourse comprehension in relation to the different information states of the critical words. These words could either provide new or old information. It was shown that variation of accentuation influenced the amplitude of the N400, with a larger amplitude for accented than for deaccented words. In addition, there was an interaction between accentuation and information state. The N400 amplitude difference between accented and deaccented new information was smaller than that between accented and deaccented old information. The results demonstrate that, during spoken discourse comprehension, listeners rapidly extract the semantic consequences of accentuation in relation to the previous discourse context. Moreover, our results show that the N400 amplitude can be larger for correct (new, accented words) than incorrect (new, deaccented words) information. This, we argue, proves that the N400 does not react to semantic anomaly per se, but rather to semantic integration load, which is higher for new information.

  15. Orthographic effects in spoken language: on-line activation or phonological restructuring?

    PubMed

    Perre, Laetitia; Pattamadilok, Chotiga; Montant, Marie; Ziegler, Johannes C

    2009-06-12

    Previous research has shown that literacy (i.e., learning to read and spell) affects spoken language processing. However, there is an on-going debate about the nature of this influence. Some argued that orthography is co-activated on-line whenever we hear a spoken word. Others suggested that orthography is not activated on-line but has changed the nature of the phonological representations. Finally, both effects might occur simultaneously, that is, orthography might be activated on-line in addition to having changed the nature of the phonological representations. Previous studies have not been able to tease apart these hypotheses. The present study started by replicating the finding of an orthographic consistency effect in spoken word recognition using event-related brain potentials (ERPs): words with multiple spellings (i.e., inconsistent words) differed from words with unique spellings (i.e., consistent words) as early as 330 ms after the onset of the target. We then employed standardized low resolution electromagnetic tomography (sLORETA) to determine the possible underlying cortical generators of this effect. The results showed that the orthographic consistency effect was clearly localized in a classic phonological area (left BA40). No evidence was found for activation in the posterior cortical areas coding orthographic information, such as the visual word form area in the left fusiform gyrus (BA37). This finding is consistent with the restructuring hypothesis according to which phonological representations are "contaminated" by orthographic knowledge.

  16. Spoken language development in oral preschool children with permanent childhood deafness.

    PubMed

    Sarant, Julia Z; Holt, Colleen M; Dowell, Richard C; Rickards, Field W; Blamey, Peter J

    2009-01-01

    This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were evaluated using a combination of the Child Development Inventory, the Peabody Picture Vocabulary Test, and the Preschool Clinical Evaluation of Language Fundamentals depending on their age at the time of assessment. Maternal education, cognitive ability, and family involvement were also measured. Over half of the children who participated in this study had poor language outcomes overall. No significant differences were found in language outcomes on any of the measures for children who were diagnosed early and those diagnosed later. Multiple regression analyses showed that family participation, degree of hearing loss, and cognitive ability significantly predicted language outcomes and together accounted for almost 60% of the variance in scores. This article highlights the importance of family participation in intervention programs to enable children to achieve optimal language outcomes. Further work may clarify the effects of early diagnosis on language outcomes for preschool children.

  17. Symbolic gestures and spoken language are processed by a common neural system.

    PubMed

    Xu, Jiang; Gannon, Patrick J; Emmorey, Karen; Smith, Jason F; Braun, Allen R

    2009-12-08

    Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects.

  18. Symbolic gestures and spoken language are processed by a common neural system

    PubMed Central

    Xu, Jiang; Gannon, Patrick J.; Emmorey, Karen; Smith, Jason F.; Braun, Allen R.

    2009-01-01

    Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating “be quiet”), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects. PMID:19923436

  19. Predictors of Early Reading Skill in 5-Year-Old Children with Hearing Loss Who Use Spoken Language

    ERIC Educational Resources Information Center

    Cupples, Linda; Ching, Teresa Y. C.; Crowe, Kathryn; Day, Julia; Seeto, Mark

    2014-01-01

    This research investigated the concurrent association between early reading skills and phonological awareness (PA), print knowledge, language, cognitive, and demographic variables in 101 five-year-old children with prelingual hearing losses ranging from mild to profound who communicated primarily via spoken language. All participants were fitted…

  20. Spoken Language Comprehension of Phrases, Simple and Compound-Active Sentences in Non-Speaking Children with Severe Cerebral Palsy

    ERIC Educational Resources Information Center

    Geytenbeek, Joke J. M.; Heim, Margriet J. M.; Knol, Dirk L.; Vermeulen, R. Jeroen; Oostrom, Kim J.

    2015-01-01

    Background Children with severe cerebral palsy (CP) (i.e. "non-speaking children with severely limited mobility") are restricted in many domains that are important to the acquisition of language. Aims To investigate comprehension of spoken language on sentence type level in non-speaking children with severe CP. Methods & Procedures…

  3. The evolutionary history of genes involved in spoken and written language: beyond FOXP2.

    PubMed

    Mozzi, Alessandra; Forni, Diego; Clerici, Mario; Pozzoli, Uberto; Mascheretti, Sara; Guerini, Franca R; Riva, Stefania; Bresolin, Nereo; Cagliani, Rachele; Sironi, Manuela

    2016-02-25

    Humans possess a communication system based on spoken and written language. Other animals can learn vocalization by imitation, but this is not equivalent to human language. Many genes have been implicated in language impairment (LI) and developmental dyslexia (DD), but their evolutionary history has not been thoroughly analyzed. Herein we analyzed the evolution of ten genes involved in DD and LI. Results show that the evolutionary history of LI genes in mammals and birds was comparable in vocal-learner species and non-learners. For the human lineage, several sites showing evidence of positive selection were identified in KIAA0319 and were already present in Neanderthals and Denisovans, suggesting that any phenotypic change they entailed was shared with archaic hominins. Conversely, in FOXP2, ROBO1, ROBO2, and CNTNAP2, non-coding changes rose to high frequency after the separation from archaic hominins. These variants are promising candidates for association studies in LI and DD.

  4. The evolutionary history of genes involved in spoken and written language: beyond FOXP2

    PubMed Central

    Mozzi, Alessandra; Forni, Diego; Clerici, Mario; Pozzoli, Uberto; Mascheretti, Sara; Guerini, Franca R.; Riva, Stefania; Bresolin, Nereo; Cagliani, Rachele; Sironi, Manuela

    2016-01-01

    Humans possess a communication system based on spoken and written language. Other animals can learn vocalization by imitation, but this is not equivalent to human language. Many genes have been implicated in language impairment (LI) and developmental dyslexia (DD), but their evolutionary history has not been thoroughly analyzed. Herein we analyzed the evolution of ten genes involved in DD and LI. Results show that the evolutionary history of LI genes in mammals and birds was comparable in vocal-learner species and non-learners. For the human lineage, several sites showing evidence of positive selection were identified in KIAA0319 and were already present in Neanderthals and Denisovans, suggesting that any phenotypic change they entailed was shared with archaic hominins. Conversely, in FOXP2, ROBO1, ROBO2, and CNTNAP2, non-coding changes rose to high frequency after the separation from archaic hominins. These variants are promising candidates for association studies in LI and DD. PMID:26912479

  5. Removal of muscle artifacts from EEG recordings of spoken language production.

    PubMed

    De Vos, Maarten; Riès, Stephanie; Vanderperren, Katrien; Vanrumste, Bart; Alario, Francois-Xavier; Van Huffel, Sabine; Burle, Boris

    2010-06-01

    Research on the neural basis of language processing has often avoided investigating spoken language production for fear of the electromyographic (EMG) artifacts that articulation induces on the electro-encephalogram (EEG) signal. Indeed, such articulation artifacts are typically much larger than the brain signal of interest. Recently, a Blind Source Separation technique based on Canonical Correlation Analysis was proposed to separate tonic muscle artifacts from continuous EEG recordings in epilepsy. In this paper, we show how the same algorithm can be adapted to remove the short EMG bursts due to articulation on every trial. Several analyses indicate that this method accurately attenuates the muscle contamination on the EEG recordings, providing the neurolinguistic community with a powerful tool to investigate the brain processes at play during overt language production.
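
    A minimal numpy sketch of the underlying idea may be helpful. Delay-based BSS-CCA recovers sources by maximizing the correlation between the EEG and a one-sample-delayed copy of itself; the sources with the lowest lag-1 autocorrelation are the most EMG-like and are zeroed before back-projection. This is a generic illustration of the technique, not the authors' adapted algorithm: the function name, the one-sample delay, and the fixed number of removed components are assumptions.

```python
import numpy as np

def bss_cca_denoise(X, n_remove=2):
    """Suppress EMG-like sources in multichannel EEG via delay-based BSS-CCA.

    X: (n_channels, n_samples) array; n_remove: how many of the
    least-autocorrelated sources to zero before back-projection.
    (Illustrative sketch; names and defaults are assumptions.)
    """
    X = X - X.mean(axis=1, keepdims=True)    # remove channel means
    A, B = X[:, 1:], X[:, :-1]               # signal and one-sample delayed copy
    Caa, Cbb, Cab = A @ A.T, B @ B.T, A @ B.T
    # CCA as a generalized eigenproblem: Caa^-1 Cab Cbb^-1 Cba w = rho^2 w,
    # where rho is the canonical correlation (here: lag-1 autocorrelation).
    M = np.linalg.solve(Caa, Cab @ np.linalg.solve(Cbb, Cab.T))
    rho2, W = np.linalg.eig(M)
    order = np.argsort(rho2.real)[::-1]      # most autocorrelated sources first
    W = W[:, order].real
    S = W.T @ X                              # estimated sources
    if n_remove > 0:
        S[-n_remove:, :] = 0.0               # drop the EMG-like sources
    return np.linalg.pinv(W.T) @ S           # back-project to channel space
```

    With n_remove=0 the transform is invertible, so the mean-centered EEG is returned unchanged; increasing n_remove trades residual EMG power against possible loss of brain signal.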

  6. Brain Basis of Phonological Awareness for Spoken Language in Children and Its Disruption in Dyslexia

    PubMed Central

    Norton, Elizabeth S.; Christodoulou, Joanna A.; Gaab, Nadine; Lieberman, Daniel A.; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D. E.

    2012-01-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7–13) and a younger group of kindergarteners (ages 5–6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia. PMID:21693783

  7. Positive Emotional Language in the Final Words Spoken Directly Before Execution

    PubMed Central

    Hirschmüller, Sarah; Egloff, Boris

    2016-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one’s own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality. PMID:26793135

  8. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    PubMed

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.

  9. Copula filtration of spoken language signals on the background of acoustic noise

    NASA Astrophysics Data System (ADS)

    Kolchenko, Lilia V.; Sinitsyn, Rustem B.

    2010-09-01

    This paper is devoted to the filtration of acoustic signals on the background of acoustic noise. Signal filtering is done with the help of a copula, a nonlinear analogue of the correlation function. The copula is estimated with the help of kernel estimates of the cumulative distribution function. In the second stage, we suggest a new procedure of adaptive filtering. Silence and sound intervals are detected before filtration with a nonparametric algorithm. The results are confirmed by experimental processing of spoken language signals.
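
    To make the copula idea concrete, the following numpy sketch estimates a bivariate copula from two signals by rank-transforming each sample through its empirical CDF. Note it uses the empirical step-function CDF rather than the kernel-smoothed CDF estimate the abstract describes, and the function names are hypothetical; a kernel version would replace the indicator counts with smoothed CDF values.

```python
import numpy as np

def pseudo_obs(x):
    """Rank-based (empirical-CDF) transform of a sample into (0, 1)."""
    ranks = np.argsort(np.argsort(x)) + 1    # ranks 1..n (distinct values)
    return ranks / (len(x) + 1)

def empirical_copula(x, y, u, v):
    """Empirical copula C(u, v) of two signals: the fraction of sample
    pairs whose CDF-transformed values fall below (u, v) jointly."""
    U, V = pseudo_obs(x), pseudo_obs(y)
    return np.mean((U <= u) & (V <= v))
```

    For independent signals C(u, v) is close to the product u*v, while dependence pushes it toward min(u, v); that deviation is what a copula-based filter exploits instead of a plain correlation.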

  10. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    PubMed

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  11. Immediate integration of different types of prosodic information during on-line spoken language comprehension: an ERP study.

    PubMed

    Li, Xiaoqing; Chen, Yiya; Yang, Yufang

    2011-04-22

    An event-related brain potentials (ERP) experiment was carried out to investigate the role of prosodic prominence and prosodic boundary, as well as their interaction, in spoken discourse comprehension. Chinese question-answer dialogues were used as stimuli. The answer sentence is a syntactically ambiguous phrase, with a prosodic phrase boundary at the immediate left side of the critical word in the carrier sentence. Meanwhile, the critical word was accented. We manipulated the question context while keeping the speech signal of the answer sentence constant, which gives rise to congruent and incongruent question-answer pairs with violations of prosodic prominence, prosodic boundary, or both. Results showed that prosodic prominence violation evoked a frontal-central negative effect (270-510 ms), while prosodic boundary violation elicited a broadly distributed negative effect (270-510 ms and 510-660 ms). The effect of combined prominence-boundary violation was similar to that of the single prosodic prominence violation. Furthermore, there was an interaction between the effect of prosodic prominence violation and the effect of prosodic boundary violation in the 270-510 ms latency window, which suggests an immediate interaction between the semantic processing of prosodic prominence and the syntactic processing of prosodic boundary during spoken language comprehension. In addition, a detailed analysis of the obtained negativity effects showed that the size of the negative effect to the prosodic boundary violation was increased by an additional prosodic prominence violation, but the size of the negative effect to the prosodic prominence violation was not affected by an additional prosodic boundary violation, which suggests an asymmetry between the effects of prosodic prominence and prosodic boundary. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Follow-up study investigating the benefits of phonological awareness intervention for children with spoken language impairment.

    PubMed

    Gillon, Gail T

    2002-01-01

    The efficacy of phonological awareness intervention for children at risk for reading disorder has received increasing attention in the literature. This paper reports the follow-up data for participants in the Gillon (2000a) intervention study. The performance of twenty 5- to 7-year-old New Zealand children with spoken language impairment, who received phonological awareness intervention, was compared with the progress made by 20 children from a control group and 20 children with typical language development approximately 11 months post-intervention. The children with spoken language impairment all had expressive phonological difficulties and demonstrated delay in early reading development. Treatment effects on strengthening phoneme-grapheme connections in spelling development were also investigated. The results suggested that structured phonological awareness intervention led to sustained growth in phoneme awareness and word-recognition performance. At the follow-up assessment, the majority of the children who received intervention were reading at, or above, the level expected for their age on a measure of word recognition. The phonological awareness intervention also significantly strengthened phoneme-grapheme connections in spelling, as evidenced by improved non-word spelling ability. In contrast, the control group of children with spoken language impairment who did not receive phonological awareness intervention showed remarkably little improvement in phoneme awareness over time, and the majority remained poor readers. The results highlight the important role speech-language therapists can play in enhancing the early reading and spelling development of children with spoken language impairment.

  13. Auditory-verbal therapy for promoting spoken language development in children with permanent hearing impairments.

    PubMed

    Brennan-Jones, Christopher G; White, Jo; Rush, Robert W; Law, James

    2014-03-12

    Congenital or early-acquired hearing impairment poses a major barrier to the development of spoken language and communication. Early detection and effective (re)habilitative interventions are essential for parents and families who wish their children to achieve age-appropriate spoken language. Auditory-verbal therapy (AVT) is a (re)habilitative approach aimed at children with hearing impairments. AVT comprises intensive early intervention therapy sessions with a focus on audition, technological management and involvement of the child's caregivers in therapy sessions; it is typically the only therapy approach used to specifically promote avoidance or exclusion of non-auditory facial communication. The primary goal of AVT is to achieve age-appropriate spoken language and for this to be used as the primary or sole method of communication. AVT programmes are expanding throughout the world; however, little evidence can be found on the effectiveness of the intervention. To assess the effectiveness of auditory-verbal therapy (AVT) in developing receptive and expressive spoken language in children who are hearing impaired. CENTRAL, MEDLINE, EMBASE, PsycINFO, CINAHL, speechBITE and eight other databases were searched in March 2013. We also searched two trials registers and three theses repositories, checked reference lists and contacted study authors to identify additional studies. The review considered prospective randomised controlled trials (RCTs) and quasi-randomised studies of children (birth to 18 years) with a significant (≥ 40 dBHL) permanent (congenital or early-acquired) hearing impairment, undergoing a programme of auditory-verbal therapy, administered by a certified auditory-verbal therapist for a period of at least six months. Comparison groups considered for inclusion were waiting list and treatment as usual controls. 
Two review authors independently assessed titles and abstracts identified from the searches and obtained full-text versions of all potentially

  14. A Spoken-Language Intervention for School-Aged Boys With Fragile X Syndrome.

    PubMed

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-05-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived language-support strategies. All sessions were implemented through distance videoteleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data were collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies, and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed.

  15. A Spoken Language Intervention for School-Aged Boys with fragile X Syndrome

    PubMed Central

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2015-01-01

    Using a single case design, a parent-mediated spoken language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared story-telling using wordless picture books and targeted three empirically derived language support strategies. All sessions were implemented via distance video-teleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data were collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies, and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed. PMID:27119214

  16. The sound of motion in spoken language: visual information conveyed by acoustic properties of speech.

    PubMed

    Shintel, Hadas; Nusbaum, Howard C

    2007-12-01

    Language is generally viewed as conveying information through symbols whose form is arbitrarily related to their meaning. This arbitrary relation is often assumed to also characterize the mental representations underlying language comprehension. We explore the idea that visuo-spatial information can be analogically conveyed through acoustic properties of speech and that such information is integrated into an analog perceptual representation as a natural part of comprehension. Listeners heard sentences describing objects, spoken at varying speaking rates. After each sentence, participants saw a picture of an object and judged whether it had been mentioned in the sentence. Participants were faster to recognize the object when motion implied by speaking rate matched the motion implied by the picture. Results suggest that visuo-spatial referential information can be analogically conveyed and represented.

  17. The time course of lexical competition during spoken word recognition in Mandarin Chinese: an event-related potential study.

    PubMed

    Huang, Xianjun; Yang, Jin-Chen

    2016-01-20

    The present study investigated the effect of lexical competition on the time course of spoken word recognition in Mandarin Chinese using a unimodal auditory priming paradigm. Two kinds of competitive environments were designed. In one session (session 1), only the unrelated and the identical primes were presented before the target words. In the other session (session 2), besides the two conditions in session 1, the target words were also preceded by cohort primes that have the same initial syllables as the targets. Behavioral results showed an inhibitory effect of the cohort competitors (primes) on target word recognition. The event-related potential results showed that spoken word recognition processing in the middle and late latency windows is modulated by whether the phonologically related competitors are presented or not. Specifically, preceding activation of the competitors can induce direct competition between multiple candidate words and lead to increased processing difficulty, primarily at the word disambiguation and selection stage during Mandarin Chinese spoken word recognition. The current study provided both behavioral and electrophysiological evidence for the lexical competition effect among candidate words during spoken word recognition.

  18. Spoken Lebanese.

    ERIC Educational Resources Information Center

    Feghali, Maksoud N.

    This book teaches the Arabic Lebanese dialect through topics such as food, clothing, transportation, and leisure activities. It also provides background material on the Arab World in general and the region where Lebanese Arabic is spoken or understood--Lebanon, Syria, Jordan, Palestine--in particular. This language guide is based on the phonetic…

  19. Pre-school children have better spoken language when early implanted.

    PubMed

    Cuda, Domenico; Murri, Alessandra; Guerzoni, Letizia; Fabrizi, Enrico; Mariani, Valeria

    2014-08-01

    The objectives of this study were: (1) to investigate the effect of age at cochlear implantation (CI) on vocabulary development; (2) to evaluate the effect of age at CI surgery on syntactic development; and (3) to examine the role of gender, age at first diagnosis, and maternal education level on spoken language development. Retrospective study. Thirty children with congenital severe-to-profound sensorineural hearing loss (SNHL) were sampled. They were diagnosed and fitted with hearing aids by six months of age. They were implanted between 8 and 17 months of age. The MacArthur-Bates Communicative Development Inventory (MCDI) was administered at the age of 36 months. The total productive vocabulary (word number raw score), the mean length of utterance (M3L), and sentence complexity were analysed. The average word number raw score was 566.3 for the children implanted before 12 months of age versus 355 for those implanted later. The M3L was 8.3 for those implanted under 1 year versus 4.2 for those implanted later. The average sentence complexity was 82.3% for those receiving CI before 12 months versus 24.4% for those who underwent CI after 12 months. Regression analysis revealed a highly significant negative linear effect of age at CI surgery on all outcomes. Females had better outcomes. Age at diagnosis was not correlated with the linguistic results. The mother's education level had a positive significant effect on sentence complexity. CI before 12 months of age in pre-school children with SNHL has a positive effect on spoken language. Females seem to have better linguistic results. Finally, a high maternal education level appears to have some positive effect on language development. Copyright © 2014. Published by Elsevier Ireland Ltd.

  20. Effects of early auditory experience on the spoken language of deaf children at 3 years of age.

    PubMed

    Nicholas, Johanna Grant; Geers, Ann E

    2006-06-01

    By age 3, typically developing children have achieved extensive vocabulary and syntax skills that facilitate both cognitive and social development. Substantial delays in spoken language acquisition have been documented for children with severe to profound deafness, even those with auditory oral training and early hearing aid use. This study documents the spoken language skills achieved by orally educated 3-yr-olds whose profound hearing loss was identified and hearing aids fitted between 1 and 30 mo of age and who received a cochlear implant between 12 and 38 mo of age. The purpose of the analysis was to examine the effects of age, duration, and type of early auditory experience on spoken language competence at age 3.5 yr. The spoken language skills of 76 children who had used a cochlear implant for at least 7 mo were evaluated via standardized 30-minute language sample analysis, a parent-completed vocabulary checklist, and a teacher language-rating scale. The children were recruited from and enrolled in oral education programs or therapy practices across the United States. Inclusion criteria included presumed deaf since birth, English the primary language of the home, no other known conditions that interfere with speech/language development, enrolled in programs using oral education methods, and no known problems with the cochlear implant lasting more than 30 days. Strong correlations were obtained among all language measures. Therefore, principal components analysis was used to derive a single Language Factor score for each child. 
A number of possible predictors of language outcome were examined, including age at identification and intervention with a hearing aid, duration of use of a hearing aid, pre-implant pure-tone average (PTA) threshold with a hearing aid, PTA threshold with a cochlear implant, and duration of use of a cochlear implant/age at implantation (the last two variables were practically identical because all children were tested between 40 and 44
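The "single Language Factor score" derived via principal components analysis, as described in this abstract, can be sketched as a first-principal-component projection of standardized scores. The sketch below uses fabricated data; the measure names, sample size structure, and noise level are assumptions for illustration, not the study's data:

```python
# Sketch: derive one factor score from several correlated language measures
# via principal components analysis. All data here are fabricated.
import numpy as np

rng = np.random.default_rng(0)
n_children = 76
latent = rng.normal(size=n_children)             # underlying language ability
measures = np.column_stack([                     # three correlated measures
    latent + 0.3 * rng.normal(size=n_children),  # language sample analysis
    latent + 0.3 * rng.normal(size=n_children),  # parent vocabulary checklist
    latent + 0.3 * rng.normal(size=n_children),  # teacher language rating
])

# Standardize each measure, then project onto the first principal component.
z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
first_pc = eigvecs[:, np.argmax(eigvals)]
language_factor = z @ first_pc                   # one score per child

# With strongly correlated measures, the first component dominates.
explained = eigvals.max() / eigvals.sum()
```

When the input measures correlate strongly, as reported in the abstract, the first component captures most of the shared variance, which is what justifies collapsing them into a single score.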

  2. Cortical organization for receptive language functions in Chinese, English, and Spanish: a cross-linguistic MEG study.

    PubMed

    Valaki, C E; Maestu, F; Simos, P G; Zhang, W; Fernandez, A; Amo, C M; Ortiz, T M; Papanicolaou, A C

    2004-01-01

    Chinese differs from Indo-European languages in both its written and spoken forms. Because Chinese is a tonal language, tones convey lexically meaningful information. The current study examines patterns of neurophysiological activity in temporal and temporoparietal brain areas as speakers of two Indo-European languages (Spanish and English) and speakers of Mandarin-Chinese were engaged in a spoken-word recognition task that is used clinically for the presurgical determination of hemispheric dominance for receptive language functions. Brain magnetic activation profiles were obtained from 92 healthy adult volunteers: 30 monolingual native speakers of Mandarin-Chinese, 20 native speakers of Spanish, and 42 native speakers of American English. Activation scans were acquired in two different whole-head MEG systems using identical testing methods. Results indicate that (a) the degree of hemispheric asymmetry in the duration of neurophysiological activity in temporal and temporoparietal regions was reduced in the Chinese group, (b) the proportion of individuals who showed bilaterally symmetric activation was significantly higher in this group, and (c) group differences in functional hemispheric asymmetry were first noted after the initial sensory processing of the word stimuli. Furthermore, group differences in the degree of hemispheric asymmetry were primarily due to a greater degree of activation in the right temporoparietal region in the Chinese group, suggesting increased participation of this region in spoken word recognition in Mandarin-Chinese.

  3. Contribution of spoken language and socio-economic background to adolescents' educational achievement at age 16 years.

    PubMed

    Spencer, Sarah; Clegg, Judy; Stackhouse, Joy; Rush, Robert

    2017-03-01

    Well-documented associations exist between socio-economic background and language ability in early childhood, and between educational attainment and language ability in children with clinically referred language impairment. However, very little research has looked at the associations between language ability, educational attainment and socio-economic background during adolescence, particularly in populations without language impairment. To investigate: (1) whether adolescents with higher educational outcomes overall had higher language abilities; and (2) associations between adolescent language ability, socio-economic background and educational outcomes, specifically in relation to Mathematics, English Language and English Literature GCSE grades. A total of 151 participants completed five standardized language assessments measuring vocabulary, comprehension of sentences and spoken paragraphs, and narrative skills, and one nonverbal assessment, when they were between 13 and 14 years old. These data were compared with the participants' educational achievement upon leaving secondary education (16 years old). Univariate logistic regressions were employed to identify the language assessments and demographic factors associated with achieving a targeted A*-C grade in English Language, English Literature and Mathematics General Certificate of Secondary Education (GCSE) examinations at 16 years. Further logistic regressions were then conducted to examine the contribution of socio-economic background and spoken language skills in multivariate models. Vocabulary, comprehension of sentences and spoken paragraphs, and mean length of utterance in a narrative task, along with socio-economic background, contributed to whether participants achieved an A*-C grade in GCSE Mathematics, English Language and English Literature. Nonverbal ability contributed to English Language and Mathematics.
The results of multivariate logistic regressions then found that vocabulary skills
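The univariate logistic regressions described above can be illustrated with a minimal sketch: predicting a binary A*-C outcome from one standardized language score. All data, coefficients, and fitting settings below are fabricated for illustration and are not the study's values:

```python
# Sketch: univariate logistic regression of a pass/fail GCSE outcome on a
# standardized language assessment score. Data are fabricated.
import numpy as np

rng = np.random.default_rng(1)
n = 151
vocab = rng.normal(size=n)                  # standardized vocabulary score
true_logit = 0.8 * vocab + 0.2              # fabricated "true" relationship
achieved = rng.random(n) < 1 / (1 + np.exp(-true_logit))  # A*-C grade (0/1)

# Fit intercept and slope by gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), vocab])
beta = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ beta))         # predicted probability of A*-C
    beta += 0.1 * X.T @ (achieved - p) / n  # average log-likelihood gradient

odds_ratio = np.exp(beta[1])                # odds multiplier per SD of score
```

A positive fitted slope (odds ratio above 1) corresponds to the reported pattern: higher language scores are associated with higher odds of an A*-C grade.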

  4. Saving Chinese-Language Education in Singapore

    ERIC Educational Resources Information Center

    Lee, Cher Leng

    2012-01-01

    Three-quarters of Singapore's population consists of ethnic Chinese, and yet, learning Chinese (Mandarin) has been a headache for many Singapore students. Recently, many scholars have argued that the rhetoric of language planning for Mandarin Chinese should be shifted from emphasizing its cultural value to stressing its economic value since…

  6. Spoken language and the decision to move the eyes: to what extent are language-mediated eye movements automatic?

    PubMed

    Mishra, Ramesh K; Olivers, Christian N L; Huettig, Falk

    2013-01-01

    Recent eye-tracking research has revealed that spoken language can guide eye gaze very rapidly (and closely time-locked to the unfolding speech) toward referents in the visual world. We discuss whether, and to what extent, such language-mediated eye movements are automatic rather than subject to conscious and controlled decision-making. We consider whether language-mediated eye movements adhere to four main criteria of automatic behavior, namely, whether they are fast and efficient, unintentional, unconscious, and overlearned (i.e., arrived at through extensive practice). Current evidence indicates that language-driven oculomotor behavior is fast but not necessarily always efficient. It seems largely unintentional though there is also some evidence that participants can actively use the information in working memory to avoid distraction in search. Language-mediated eye movements appear to be for the most part unconscious and have all the hallmarks of an overlearned behavior. These data are suggestive of automatic mechanisms linking language to potentially referred-to visual objects, but more comprehensive and rigorous testing of this hypothesis is needed.

  7. Spoken language interaction with model uncertainty: an adaptive human-robot interaction system

    NASA Astrophysics Data System (ADS)

    Doshi, Finale; Roy, Nicholas

    2008-12-01

    Spoken language is one of the most intuitive forms of interaction between humans and agents. Unfortunately, agents that interact with people using natural language often experience communication errors and do not correctly understand the user's intentions. Recent systems have successfully used probabilistic models of speech, language and user behaviour to generate robust dialogue performance in the presence of noisy speech recognition and ambiguous language choices, but decisions made using these probabilistic models are still prone to errors owing to the complexity of acquiring and maintaining a complete model of human language and behaviour. In this paper, a decision-theoretic model for human-robot interaction using natural language is described. The algorithm is based on the Partially Observable Markov Decision Process (POMDP), which allows agents to choose actions that are robust not only to uncertainty from noisy or ambiguous speech recognition but also unknown user models. Like most dialogue systems, a POMDP is defined by a large number of parameters that may be difficult to specify a priori from domain knowledge, and learning these parameters from the user may require an unacceptably long training period. An extension to the POMDP model is described that allows the agent to acquire a linguistic model of the user online, including new vocabulary and word choice preferences. The approach not only avoids a training period of constant questioning as the agent learns, but also allows the agent actively to query for additional information when its uncertainty suggests a high risk of mistakes. The approach is demonstrated both in simulation and on a natural language interaction system for a robotic wheelchair application.
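The core of such a POMDP dialogue manager is a Bayesian belief update over user intents, followed by an action choice that asks for clarification when uncertainty is high. Below is a minimal sketch with hypothetical intents, observation probabilities, and confidence threshold; it is not the actual wheelchair system's model:

```python
# Minimal sketch of POMDP-style dialogue management: maintain a belief over
# user intents, update it from noisy speech observations, and ask for
# clarification when no intent is confident enough. All names and numbers
# here are hypothetical.

def update_belief(belief, obs, obs_model):
    """Bayesian belief update: b'(s) is proportional to P(obs | s) * b(s)."""
    new_belief = {s: obs_model[s].get(obs, 1e-6) * p for s, p in belief.items()}
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()}

# Hypothetical user intents for a robotic wheelchair.
belief = {"go_kitchen": 0.5, "go_bedroom": 0.5}
# P(recognized word | true intent): models noisy speech recognition.
obs_model = {
    "go_kitchen": {"kitchen": 0.7, "bedroom": 0.1},
    "go_bedroom": {"kitchen": 0.2, "bedroom": 0.6},
}

belief = update_belief(belief, "kitchen", obs_model)
# If the maximum belief is below a confidence threshold, the agent queries
# the user rather than committing to a possibly wrong move.
if max(belief.values()) < 0.9:
    action = "ask_clarification"
else:
    action = "execute_" + max(belief, key=belief.get)
```

This captures the paper's central idea in miniature: decisions are made on the belief state, so uncertainty from speech recognition directly drives when the agent chooses to query for more information.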

  8. Does It Really Matter whether Students' Contributions Are Spoken versus Typed in an Intelligent Tutoring System with Natural Language?

    ERIC Educational Resources Information Center

    D'Mello, Sidney K.; Dowell, Nia; Graesser, Arthur

    2011-01-01

    There is the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The "speech facilitation" hypothesis predicts that spoken input will "increase" learning,…

  9. Building Language Blocks in L2 Japanese: Chunk Learning and the Development of Complexity and Fluency in Spoken Production

    ERIC Educational Resources Information Center

    Taguchi, Naoko

    2008-01-01

    This pilot study examined the development of complexity and fluency of second language (L2) spoken production among L2 learners who received extensive practice on grammatical chunks as constituent units of discourse. Twenty-two students enrolled in an elementary Japanese course at a U.S. university received classroom instruction on 40 grammatical…

  10. Spoken Sentence Comprehension in Children with Dyslexia and Language Impairment: The Roles of Syntax and Working Memory

    ERIC Educational Resources Information Center

    Robertson, Erin K.; Joanisse, Marc F.

    2010-01-01

    We examined spoken sentence comprehension in school-age children with developmental dyslexia or language impairment (LI), compared to age-matched and younger controls. Sentence-picture matching tasks were employed under three different working memory (WM) loads, two levels of syntactic difficulty, and two sentence lengths. Phonological short-term…

  11. Semantic Richness and Word Learning in Children with Hearing Loss Who Are Developing Spoken Language: A Single Case Design Study

    ERIC Educational Resources Information Center

    Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie

    2015-01-01

    Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…

  12. Differential Error Types in Second-Language Students' Written and Spoken Texts: Implications for Instruction in Writing

    ERIC Educational Resources Information Center

    Makalela, Leketi

    2004-01-01

    This article reports on an empirical study undertaken at the University of the North, South Africa, to test personal classroom observation and anecdotal evidence about the persistent gap between writing and spoken proficiencies among learners of English as a second language. A comparative and contrastive analysis of speech samples in the study…

  15. The interface between language and attention: prosodic focus marking recruits a general attention network in spoken language comprehension.

    PubMed

    Kristensen, Line Burholt; Wang, Lin; Petersson, Karl Magnus; Hagoort, Peter

    2013-08-01

    In spoken language, pitch accent can mark certain information as focus, whereby more attentional resources are allocated to the focused information. Using functional magnetic resonance imaging, this study examined whether pitch accent, used for marking focus, recruited general attention networks during sentence comprehension. In a language task, we independently manipulated the prosody and semantic/pragmatic congruence of sentences. We found that semantic/pragmatic processing affected bilateral inferior and middle frontal gyrus. The prosody manipulation showed bilateral involvement of the superior/inferior parietal cortex, superior and middle temporal cortex, as well as inferior, middle, and posterior parts of the frontal cortex. We compared these regions with attention networks localized in an auditory spatial attention task. Both tasks activated bilateral superior/inferior parietal cortex, superior temporal cortex, and left precentral cortex. Furthermore, an interaction between prosody and congruence was observed in bilateral inferior parietal regions: for incongruent sentences, but not for congruent ones, there was a larger activation if the incongruent word carried a pitch accent, than if it did not. The common activations between the language task and the spatial attention task demonstrate that pitch accent activates a domain general attention network, which is sensitive to semantic/pragmatic aspects of language. Therefore, attention and language comprehension are highly interactive.

  16. Factors Influencing Chinese Language Learners' Strategy Use

    ERIC Educational Resources Information Center

    Sung, Ko-Yin

    2011-01-01

    This survey study, which involved 134 language learners enrolled in first-year Chinese as a foreign language classrooms in US universities, intended to address the research question, "Does learners' strategy use differ based on the following learner differences: (1) gender; (2) home language/culture; and (3) number of other foreign languages…

  18. The acceleration of spoken-word processing in children's native-language acquisition: an ERP cohort study.

    PubMed

    Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hagiwara, Hiroko

    2011-04-01

    Healthy adults can identify spoken words at a remarkable speed, by incrementally analyzing word-onset information. It is currently unknown how this adult-level speed of spoken-word processing emerges during children's native-language acquisition. In a picture-word mismatch paradigm, we manipulated the semantic congruency between picture contexts and spoken words, and recorded event-related potential (ERP) responses to the words. Previous similar studies focused on the N400 response, but we focused instead on the onsets of semantic congruency effects (N200 or Phonological Mismatch Negativity), which contain critical information for incremental spoken-word processing. We analyzed ERPs obtained longitudinally from two age cohorts of 40 primary-school children (total n=80) in a 3-year period. Children first tested at 7 years of age showed earlier onsets of congruency effects (by approximately 70ms) when tested 2 years later (i.e., at age 9). Children first tested at 9 years of age did not show such shortening of onset latencies 2 years later (i.e., at age 11). Overall, children's onset latencies at age 9 appeared similar to those of adults. These data challenge the previous hypothesis that word processing is well established at age 7. Instead they support the view that the acceleration of spoken-word processing continues beyond age 7. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Child-centered collaborative conversations that maximize listening and spoken language development for children with hearing loss.

    PubMed

    Garber, Ashley S; Nevins, Mary Ellen

    2012-11-01

    In the period that begins with early intervention enrollment and ends with the termination of formal education, speech-language pathologists (SLPs) will have numerous opportunities to form professional relationships that can enhance any child's listening and spoken language accomplishments. SLPs who initiate and/or nurture these relationships are urged to place the needs of the child as the core value that drives decision making. Addressing this priority will allow for the collaborative conversations necessary to develop an effective intervention plan at any level. For the SLP, the purpose of these collaborative conversations will be twofold: identifying the functional communication needs of the child with hearing loss across settings and sharing practical strategies to encourage listening and spoken language skill development. Auditory first, wait time, sabotage, and thinking turns are offered as four techniques easily implemented by all service providers to support the child with hearing loss in all educational settings.

  20. A critique of Mark D. Allen's "the preservation of verb subcategory knowledge in a spoken language comprehension deficit".

    PubMed

    Kemmerer, David

    2008-07-01

    Allen [Allen, M. (2005). The preservation of verb subcategory knowledge in a spoken language comprehension deficit. Brain and Language, 95, 255-264.] reports a single patient, WBN, who, during spoken language comprehension, is still able to access some of the syntactic properties of verbs despite being unable to access some of their semantic properties. Allen claims that these findings challenge linguistic theories which assume that much of the syntactic behavior of verbs can be predicted from their meanings. I argue, however, that this conclusion is not supported by the data for two reasons: first, Allen focuses on aspects of verb syntax that are not claimed to be influenced by verb semantics; and second, he ignores aspects of verb syntax that are claimed to be influenced by verb semantics.

  1. Taiwan's Chinese Language Development and the Creation of Language Teaching Analysis

    ERIC Educational Resources Information Center

    Tsai, Cheng-Hui; Wang, Chuan Po

    2015-01-01

    In recent years, in response to international trends, Chinese language teaching in Taiwan has been developing in full swing at all levels. Amid this boom in Chinese language teaching, many overseas children have also developed a passion for actively learning the Chinese language, and even many overseas…

  2. Chinese Treasure Chest: An Integrated Exploratory Chinese Language & Culture Program.

    ERIC Educational Resources Information Center

    Jensen, Inge-Lise; Verg-in, Yen-ti

    This publication describes the Chinese Treasure Chest project, an exploratory Chinese language and culture program developed by two elementary school teachers in the Aleutians East Borough (Alaska) School District. The project centers on the use of a large box of materials and a program plan designed to introduce elementary students in…

  3. Foreign body aspiration and language spoken at home: 10-year review.

    PubMed

    Choroomi, S; Curotta, J

    2011-07-01

    To review foreign body aspiration cases encountered over a 10-year period in a tertiary paediatric hospital, and to assess the correlation between foreign body type and language spoken at home. Retrospective chart review of all children undergoing direct laryngobronchoscopy for foreign body aspiration over a 10-year period. Age, sex, foreign body type, complications, hospital stay and home language were analysed. At direct laryngobronchoscopy, 132 children had foreign body aspiration (male:female ratio 1.31:1; mean age 32 months (2.67 years)). Mean hospital stay was 2.0 days. Foreign bodies most commonly comprised food matter (53/132; 40.1 per cent), followed by non-food matter (44/132; 33.3 per cent), material of unknown composition (24/132; 18.2 per cent) and a negative endoscopy (11/132; 8.3 per cent). Most parents spoke English (92/132, 69.7 per cent; vs non-English-speaking, 40/132, 30.3 per cent), but non-English-speaking patients had disproportionately more food foreign bodies, and significantly more nut aspirations (p = 0.0065). Results constitute level 2b evidence. Patients from non-English-speaking backgrounds had a significantly higher incidence of food (particularly nut) aspiration. Awareness-raising and public education are needed in the relevant communities to prevent certain foods, particularly nuts, being given to children too young to chew and swallow them adequately.

  4. Cross-language perception of Cantonese vowels spoken by native and non-native speakers.

    PubMed

    So, Connie K; Attina, Virginie

    2014-10-01

    This study examined the effect of native language background on listeners' perception of native and non-native vowels spoken by native (Hong Kong Cantonese) and non-native (Mandarin and Australian English) speakers. Listeners completed a discrimination task and an identification task, with and without visual cues, in clear and noisy conditions. Results indicated that visual cues did not facilitate perception, and performance was better in clear than in noisy conditions. More importantly, the Cantonese talker's vowels were the easiest to discriminate, and the Mandarin talker's vowels were as intelligible as the native talkers' speech. These results supported the interlanguage speech native intelligibility benefit patterns proposed by Hayes-Harb et al. (J Phonetics 36:664-679, 2008). The Mandarin and English listeners' identification patterns were similar to those of the Cantonese listeners, suggesting that they might have assimilated Cantonese vowels to their closest native vowels. In addition, listeners' perceptual patterns were consistent with the principles of Best's Perceptual Assimilation Model (Best in Speech perception and linguistic experience: issues in cross-language research. York Press, Timonium, 1995).

  5. Are Young Children With Cochlear Implants Sensitive to the Statistics of Words in the Ambient Spoken Language?

    PubMed Central

    McGregor, Karla K.; Spencer, Linda J.

    2015-01-01

    Purpose The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation. Method We analyzed archival data collected from the parents of 36 children who received cochlear implantation (20 unilateral, 16 bilateral) before 24 months of age. The parents reported their children's word productions 12 months after implantation using the MacArthur Communicative Development Inventories: Words and Sentences (Fenson et al., 1993). We computed the number of words, out of 292 possible monosyllabic nouns, verbs, and adjectives, that each child was reported to say and calculated the average phonotactic probability, neighborhood density, and word frequency of the reported words. Results Spoken vocabulary size positively correlated with average phonotactic probability and negatively correlated with average neighborhood density, but only in children with bilateral CIs. Conclusion At 12 months postimplantation, children with bilateral CIs demonstrate sensitivity to statistical characteristics of words in the ambient spoken language akin to that reported for children with normal hearing during the early stages of lexical development. Children with unilateral CIs do not. PMID:25677929
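The reported result (a positive correlation between spoken vocabulary size and the average phonotactic probability of the reported words) can be sketched as a Pearson correlation over per-child summary values. The data below are fabricated for illustration only:

```python
# Sketch: correlate each child's spoken vocabulary size with the average
# phonotactic probability of the words they produce. Values are fabricated.
import numpy as np

rng = np.random.default_rng(2)
n_children = 16                               # e.g., the bilateral-CI group
vocab_size = rng.integers(20, 292, size=n_children)  # words reported, max 292

# Fabricated positive relationship: larger lexicons skew toward words with
# more probable sound sequences, plus per-child noise.
avg_phonotactic_prob = (0.002 + 0.00001 * vocab_size
                        + rng.normal(0, 0.0003, n_children))

# Pearson correlation between vocabulary size and phonotactic probability.
r = np.corrcoef(vocab_size, avg_phonotactic_prob)[0, 1]
```

A positive `r` here mirrors the abstract's finding for the bilateral-CI group; with the study's real data, the analogous calculation for neighborhood density yielded a negative correlation instead.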

  6. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    PubMed Central

    Li, Yu; Zhang, Linjun; Xia, Zhichao; Yang, Jie; Shu, Hua; Li, Ping

    2017-01-01

    Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability only in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading. PMID:28690507

  7. A common neural system is activated in hearing non-signers to process French sign language and spoken French.

    PubMed

    Courtin, Cyril; Jobard, Gael; Vigneau, Mathieu; Beaucousin, Virginie; Razafimandimby, Annick; Hervé, Pierre-Yves; Mellet, Emmanuel; Zago, Laure; Petit, Laurent; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie

    2011-01-15

    We used functional magnetic resonance imaging to investigate the areas activated by signed narratives in non-signing subjects naïve to sign language (SL) and compared them to the activation obtained when the subjects heard speech in their mother tongue. A subset of left hemisphere (LH) language areas activated when participants watched an audio-visual narrative in their mother tongue was also activated when they observed a signed narrative. The inferior frontal (IFG) and precentral (Prec) gyri, the posterior parts of the planum temporale (pPT) and of the superior temporal sulcus (pSTS), and the occipito-temporal junction (OTJ) were activated by both languages. The activity of these regions was not related to the presence of communicative intent, because no such changes were observed when the non-signers watched a muted video of a spoken narrative. Recruitment was also not triggered by the linguistic structure of SL, because the areas, except pPT, were not activated when subjects listened to an unknown spoken language. The comparison of brain reactivity for spoken and sign languages shows that SL has a special status in the brain compared to speech; in contrast to an unknown oral language, the neural correlates of SL overlap LH speech comprehension areas in non-signers. These results support the idea that strong relationships exist between areas involved in human action observation and language, suggesting that the observation of hand gestures has shaped the lexico-semantic language areas, as proposed by the motor theory of speech. As a whole, the present results support the theory of a gestural origin of language.

  8. Grammatical Processing of Spoken Language in Child and Adult Language Learners

    ERIC Educational Resources Information Center

    Felser, Claudia; Clahsen, Harald

    2009-01-01

    This article presents a selective overview of studies that have investigated auditory language processing in children and late second-language (L2) learners using online methods such as event-related potentials (ERPs), eye-movement monitoring, or the cross-modal priming paradigm. Two grammatical phenomena are examined in detail, children's and…

  9. Combining Speech Recognition/Natural Language Processing with 3D Online Learning Environments to Create Distributed Authentic and Situated Spoken Language Learning

    ERIC Educational Resources Information Center

    Jones, Greg; Squires, Todd; Hicks, Jeramie

    2008-01-01

    This article will describe research done at the National Institute of Multimedia in Education, Japan and the University of North Texas on the creation of a distributed Internet-based spoken language learning system that would provide more interactive and motivating learning than current multimedia and audiotape-based systems. The project combined…

  11. Communicative Language Teaching in the Chinese Environment

    ERIC Educational Resources Information Center

    Hu, Wei

    2010-01-01

    In order to explore effective ways to develop Chinese English learners' communicative competence, this study first briefly reviews the advantages of the communicative language teaching (CLT) method, which is widely practiced in Western countries, and analyzes in detail its obstacles in the Chinese classroom context. Then it offers guidelines for…

  12. How and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension: an ERP study.

    PubMed

    Li, Xiao-qing; Ren, Gui-qin

    2012-07-01

    An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of accentuation. Chinese spoken sentences were used as stimuli. The critical word in the carrier sentence was either semantically congruent or incongruent with the preceding sentence context. Meanwhile, the critical word was de-accented (DeAccent), generally accented (Accent), or greatly accented (GreatAccent). Results showed that, relative to semantically congruent words, the semantically incongruent word elicited a parietal-occipital N400 effect in the Accent condition and a broadly distributed N400 effect in the GreatAccent condition; however, no significant N400 effect was found in the DeAccent condition. Further onset analysis found that the N400 effect in the GreatAccent condition started around 50 ms earlier than that in the Accent condition. In addition, in the GreatAccent condition, the incongruent words also elicited an early negative effect in the 110-190 ms latency window after the acoustic onset of the critical word. The results indicated that, during on-line speech processing, accentuation can rapidly modulate temporally selective attention and consequently influence the depth or speed of subsequent semantic processing; the effect of accentuation on attention allocation and semantic processing changes gradually with the degree of accentuation.

  13. Research on Spoken Dialogue Systems

    NASA Technical Reports Server (NTRS)

    Aist, Gregory; Hieronymus, James; Dowding, John; Hockey, Beth Ann; Rayner, Manny; Chatzichrisafis, Nikos; Farrell, Kim; Renders, Jean-Michel

    2010-01-01

    Research in the field of spoken dialogue systems has been performed with the goal of making such systems more robust and easier to use in demanding situations. The term "spoken dialogue systems" signifies unified software systems containing speech-recognition, speech-synthesis, dialogue management, and ancillary components that enable human users to communicate, using natural spoken language or nearly natural prescribed spoken language, with other software systems that provide information and/or services.

  14. Grammatical processing of spoken language in child and adult language learners.

    PubMed

    Felser, Claudia; Clahsen, Harald

    2009-06-01

    This article presents a selective overview of studies that have investigated auditory language processing in children and late second-language (L2) learners using online methods such as event-related potentials (ERPs), eye-movement monitoring, or the cross-modal priming paradigm. Two grammatical phenomena are examined in detail, children's and adults' processing of German plural inflections (Lück et al. Brain Res 1077:144-152, 2006; Hahne et al. J Cognitive Neurosci 18:121-134, 2006; Clahsen et al. J Child Language 34:601-622, 2007) and language learners' processing of filler-gap dependencies in English (Felser C, Roberts L Second Language Res 23:9-36, 2007; Roberts et al. J Psycholinguist Res 36:175-188, 2007). The results from these studies reveal clear differences between native and nonnative processing in both domains of grammar, suggesting that nonnative listeners rely less on grammatical parsing routines during processing than either child or adult native listeners. We also argue that factors such as slower processing speed or cognitive resource limitations only provide a partial account of our findings.

  15. Children with reading difficulties show differences in brain regions associated with orthographic processing during spoken language processing.

    PubMed

    Desroches, Amy S; Cone, Nadia E; Bolger, Donald J; Bitan, Tali; Burman, Douglas D; Booth, James R

    2010-10-14

    We explored the neural basis of spoken language deficits in children with reading difficulty, specifically focusing on the role of orthography during spoken language processing. We used functional magnetic resonance imaging (fMRI) to examine differences in brain activation between children with reading difficulties (aged 9 to 15 years) and age-matched children with typical achievement during an auditory rhyming task. Both groups showed activation in bilateral superior temporal gyri (BA 42 and 22), a region associated with phonological processing, with no significant between-group differences. Interestingly, typically achieving children, but not children with reading difficulties, showed activation of left fusiform cortex (BA 37), a region implicated in orthographic processing. Furthermore, this activation was significantly greater for typically achieving children compared to those with reading difficulties. These findings suggest that typical children automatically activate orthographic representations during spoken language processing, while those with reading difficulties do not. Follow-up analyses revealed that the intensity of the activation in the fusiform gyrus was associated with significantly stronger behavioral conflict effects in typically achieving children only (i.e., longer latencies to rhyming pairs with orthographically dissimilar endings than to those with identical orthographic endings; jazz-has vs. cat-hat). Finally, for reading disabled children, a positive correlation between left fusiform activation and nonword reading was observed, such that greater access to orthography was related to decoding ability. Taken together, the results suggest that the integration of orthographic and phonological processing is directly related to reading ability.

  16. Implicit learning of nonadjacent phonotactic dependencies in the perception of spoken language

    NASA Astrophysics Data System (ADS)

    McLennan, Conor T.; Luce, Paul A.

    2004-05-01

    We investigated the learning of nonadjacent phonotactic dependencies in adults. Following previous research examining learning of dependencies at a grammatical level (Gomez, 2002), we manipulated the co-occurrence of nonadjacent phonological segments within a spoken syllable. Each listener was exposed to consonant-vowel-consonant nonword stimuli produced by one of two phonological grammars. Both languages contained the same adjacent dependencies between the initial consonant-vowel and final vowel-consonant sequences but differed on the co-occurrences of initial and final consonants. The number of possible types of vowels that intervened between the initial and final consonants was also manipulated. Listeners' learning of nonadjacent segmental dependencies was evaluated in a speeded recognition task in which they heard (1) old nonwords on which they had been trained, (2) new nonwords generated by the grammar on which they had been trained, and (3) new nonwords generated by the grammar on which they had not been trained. The results provide evidence for listeners' sensitivity to nonadjacent dependencies. However, this sensitivity is manifested as an inhibitory competition effect rather than a facilitative effect on pattern processing. [Research supported by Research Grant No. R01 DC 0265802 from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health.]

  17. Satisfaction with telemedicine for teaching listening and spoken language to children with hearing loss.

    PubMed

    Constantinescu, Gabriella

    2012-07-01

    Auditory-Verbal Therapy (AVT) is an effective early intervention for children with hearing loss. The Hear and Say Centre in Brisbane offers AVT sessions to families soon after diagnosis, and about 20% of the families in Queensland participate via PC-based videoconferencing (Skype). Parent and therapist satisfaction with the telemedicine sessions was examined by questionnaire. All families had been enrolled in the telemedicine AVT programme for at least six months. Their average distance from the Hear and Say Centre was 600 km. Questionnaires were completed by 13 of the 17 parents and all five therapists. Parents and therapists generally expressed high satisfaction in the majority of the sections of the questionnaire, e.g. most rated the audio and video quality as good or excellent. All parents felt comfortable or as comfortable as face-to-face when discussing matters with the therapist online, and were satisfied or as satisfied as face-to-face with their level and their child's level of interaction/rapport with the therapist. All therapists were satisfied or very satisfied with the telemedicine AVT programme. The results demonstrate the potential of telemedicine service delivery for teaching listening and spoken language to children with hearing loss in rural and remote areas of Australia.

  18. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    NASA Astrophysics Data System (ADS)

    Feenaughty, Lynda

    Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors influence listener impressions, for three connected speech tasks presumed to differ in cognitive-linguistic demand, across four carefully defined speaker groups: 1) MS with cognitive deficits (MSCI), 2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS), 3) MS without dysarthria or cognitive deficits (MS), and 4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: 48 speakers, including 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers participated. The three MS groups and control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function. A standard z-score of ≤ -1.50 indicated deficits in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. Experimental speech tasks of interest included audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained including subordination index, inter-sentence cohesion adequacy, and lexical diversity. Ten listeners

  19. Birds, primates, and spoken language origins: behavioral phenotypes and neurobiological substrates

    PubMed Central

    Petkov, Christopher I.; Jarvis, Erich D.

    2012-01-01

    Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates such as non-human primates and most other bird groups either cannot or do so to a very limited degree. To explain the similarities among humans and vocal-learning birds and the differences with other species, various theories have been proposed. One set of theories are motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set of theories are cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that behaviorally vocal-production learning abilities are more distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that determining how spoken language evolved is likely to become clearer with concerted efforts in testing comparative data from many non-human animal species. PMID:22912615

  20. How vocabulary size in two languages relates to efficiency in spoken word recognition by young Spanish-English bilinguals

    PubMed Central

    Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda

    2010-01-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n=26; 2;6 yrs). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children’s facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children’s ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language. PMID:19726000
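
    The within-language link reported here, processing efficiency related to vocabulary size "after controlling for" the other-language measures, is the logic of a partial correlation: regress the control variables out of both measures and correlate the residuals. Below is a hedged sketch of that mechanics with invented variable names and synthetic data (the study's actual statistical models may differ); the example shows the complementary case, a raw association that vanishes once a shared control is partialled out.

```python
import numpy as np

def partial_corr(a, b, controls):
    """Correlation between a and b after regressing the control
    variables (plus an intercept) out of each (partial correlation)."""
    X = np.column_stack([np.ones(len(a))] + list(controls))
    resid = lambda v: v - X @ np.linalg.lstsq(X, v, rcond=None)[0]
    ra, rb = resid(a), resid(b)
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# Synthetic example: two measures correlate only because both reflect
# a shared latent factor; controlling for it removes the association.
rng = np.random.default_rng(2)
shared = rng.standard_normal(500)                     # hypothetical shared factor
measure_a = shared + 0.5 * rng.standard_normal(500)   # hypothetical processing-speed score
measure_b = shared + 0.5 * rng.standard_normal(500)   # hypothetical vocabulary score
raw = float(np.corrcoef(measure_a, measure_b)[0, 1])  # sizeable
partial = partial_corr(measure_a, measure_b, [shared])  # near zero
```

    Conversely, an association that survives partialling, as the within-language vocabulary-efficiency link did in this record, indicates a relationship beyond the controlled variables.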

  1. Verbal Short-Term Memory Development and Spoken Language Outcomes in Deaf Children with Cochlear Implants

    PubMed Central

    Harris, Michael S.; Kronenberger, William G.; Gao, Sujuan; Hoen, Helena M.; Miyamoto, Richard T.; Pisoni, David B.

    2012-01-01

    Objectives Cochlear implants (CIs) help many deaf children achieve near normal speech and language (S/L) milestones. Nevertheless, high levels of unexplained variability in S/L outcomes are limiting factors in improving the effectiveness of CIs in deaf children. The objective of this study was to longitudinally assess the role of verbal short-term memory (STM) and working memory (WM) capacity as a progress-limiting source of variability in S/L outcomes following CI in children. Design Longitudinal study of 66 children with CIs for pre-lingual severe-to-profound hearing loss. Outcome measures included performance on Digit Span Forward (DSF), Digit Span Backward (DSB), and four conventional S/L measures that examined spoken word recognition (PBK), receptive vocabulary (PPVT), sentence recognition skills (HINT), and receptive and expressive language functioning (CELF). Results Growth curves for DSF and DSB in the CI sample over time were comparable in slope, but consistently lagged in magnitude relative to norms for normal-hearing peers of the same age. For DSF and DSB, 50.5% and 44.0%, respectively, of the CI sample scored >1 SD below the normative mean for raw scores across all ages. The first (baseline) DSF score significantly predicted all endpoint scores for the four S/L measures, and DSF slope (growth) over time predicted CELF scores. DSF baseline and slope accounted for an additional 13%–31% of variance in S/L scores after controlling for conventional predictor variables such as: chronological age at time of testing, age at time of implantation, communication mode (AOC vs. TC), and maternal education. Only DSB baseline scores predicted endpoint language scores on PPVT and CELF. DSB slopes were not significantly related to any endpoint S/L measures. DSB baseline scores and slopes taken together accounted for an additional 4%–19% of variance in S/L endpoint measures after controlling for the conventional predictor variables. Conclusions Verbal STM/WM scores
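
    The "additional variance accounted for after controlling for conventional predictor variables" figures in this record reflect hierarchical regression: fit a controls-only model, add the predictor block, and take the change in R². A minimal sketch of that computation, with invented variable names and toy data standing in for the study's measures:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X
    (X must already contain an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def delta_r2(controls, predictor, y):
    """Extra variance in y explained by `predictor` beyond the controls."""
    n = len(y)
    base = np.column_stack([np.ones(n), controls])
    full = np.column_stack([base, predictor])
    return r_squared(full, y) - r_squared(base, y)

# Synthetic example: a digit-span-like predictor explains outcome
# variance beyond a conventional control (all names hypothetical).
rng = np.random.default_rng(1)
n = 400
control_var = rng.standard_normal(n)      # e.g. an age-like covariate
span_baseline = rng.standard_normal(n)    # e.g. a baseline memory score
outcome = control_var + 0.5 * span_baseline + 0.3 * rng.standard_normal(n)
gain = delta_r2(control_var, span_baseline, outcome)  # incremental R^2
```

    Because the models are nested, the incremental R² is nonnegative; its size is what a hierarchical analysis reports as variance explained beyond the controls.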

  2. Semantic information mediates visual attention during spoken word recognition in Chinese: Evidence from the printed-word version of the visual-world paradigm.

    PubMed

    Shen, Wei; Qu, Qingqing; Li, Xingshan

    2016-07-01

    In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate the deployment of visual attention to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention but also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.

  3. Assessing comprehension of spoken language in nonspeaking children with cerebral palsy: application of a newly developed computer-based instrument.

    PubMed

    Geytenbeek, Joke J M; Heim, Margriet M J; Vermeulen, R Jeroen; Oostrom, Kim J

    2010-06-01

    This paper describes the development of an instrument to assess comprehension of spoken language in children with severe cerebral palsy (CP) who cannot speak, and for whom standard language assessment measures are not appropriate due to severe motor impairment. This instrument, the Computer-Based instrument for Low motor Language Testing (C-BiLLT), was administered to 42 children without disabilities (aged 14 months to 60 months) and to 18 children with severe CP (age 19 months to 71 months). Preliminary data showed that the instrument was acceptable to the children. Convergent validity was investigated by correlating C-BiLLT scores with test results on the well-established Reynell Developmental Language Scales (RDLS). Clinical implications and recommendations for future research are discussed.

  4. On Chinese Loan Words from English Language

    ERIC Educational Resources Information Center

    Yan, Yun; Deng, Tianbai

    2009-01-01

    In the past twenty years, with China's reform and opening policy to the outside world, there has been a sharp increase in English loan words in Chinese. On the one hand, this demonstrates that China's soft power has been growing. On the other hand, some language pollution has meanwhile been caused by the non-standard use of loan words in Chinese.…

  5. An analysis of spoken language expression during simulated emergency call triage.

    PubMed

    Morimura, Naoto; Ishikawa, Junya; Kitsuta, Yoichi; Nakamura, Kyota; Anze, Masaki; Sugiyama, Mitsugi; Sakamoto, Tetsuya

    2005-04-01

    Volunteer citizens were recruited to perform simulated emergency calls, and the expressions and content of these telephone calls were analysed to examine risk factors associated with the success or failure of communication. Six physicians played the role of patients who had various symptoms, such as cerebral stroke and ischaemic heart disease. Eighty-four volunteer citizens made simulated emergency calls. Physicians at a simulated call centre communicated with each caller regarding the patient's body position, respiratory condition, and cardiovascular status. Details of the telephone communications were analysed to determine if communication was successful. Telephone communications that resulted in the correct understanding of a simulated patient's condition were as follows: 60.2% of sessions (32/50) on whether or not a patient was breathing; 47.8% of sessions (22/46) on whether or not a patient had a pulse (carotid or radial artery); and 86.2% of sessions (56/65) on patient body position. How a simulated dispatcher verbally expressed questions was the most influential factor in the success of communication regarding respiratory condition and body position. Avoiding vague language, giving specific instructions for checking a patient, and finally reminding the caller to perform the explained procedures led to a high rate of successful communications. Various spoken expressions by simulated dispatchers in confirming patient pulse did not have any impact on the success or failure of communications. In developing a 'protocol for emergency call triage' to achieve a high rate of successful emergency communications, an analysis of expressions using simulated patients is useful.

  6. "CLASS Professional Standards" for K-12 Chinese Language Teachers

    ERIC Educational Resources Information Center

    Lee, Lucy C.; Lin, Yu-Lan; Su, Chih-Wen

    2007-01-01

    "CLASS Professional Standards" is a resource for Chinese teachers, foreign language specialists, school administrators, parents, and policy makers who recognize the importance of Chinese cultures taught by professional teachers of Chinese. The release of the book also marks the celebration of the Chinese Language Association of…

  7. Teaching and Learning Chinese: Heritage Language Classroom Discourse in Montreal

    ERIC Educational Resources Information Center

    Curdt-Christiansen, Xiao Lan

    2006-01-01

    This paper explores issues of teaching and learning Chinese as a heritage language in a Chinese heritage language school, the Zhonguo Saturday School, in Montreal, Quebec. With a student population of more than 1000, this school is the largest of the eight Chinese Heritage Language schools in Montreal. Students participating in this study were…

  8. Reliability and validity of the C-BiLLT: a new instrument to assess comprehension of spoken language in young children with cerebral palsy and complex communication needs.

    PubMed

    Geytenbeek, Joke J; Mokkink, Lidwine B; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J

    2014-09-01

    In clinical practice, a variety of diagnostic tests are available to assess a child's comprehension of spoken language. However, none of these tests have been designed specifically for use with children who have severe motor impairments and who experience severe difficulty when using speech to communicate. This article describes the process of investigating the reliability and validity of the Computer-Based Instrument for Low Motor Language Testing (C-BiLLT), which was specifically developed to assess spoken Dutch language comprehension in children with cerebral palsy and complex communication needs. The study included 806 children with typical development, and 87 nonspeaking children with cerebral palsy and complex communication needs, and was designed to provide information on the psychometric qualities of the C-BiLLT. The potential utility of the C-BiLLT as a measure of spoken Dutch language comprehension abilities for children with cerebral palsy and complex communication needs is discussed.

  9. Understanding of spoken language under challenging listening conditions in younger and older listeners: a combined behavioral and electrophysiological study.

    PubMed

    Getzmann, Stephan; Falkenstein, Michael

    2011-09-30

    Numerous studies have suggested an age-related decline in speech perception under difficult listening conditions. Here, spoken language understanding of two age groups of listeners was investigated in a naturalistic "stock price monitoring" task. Stock prices of listed companies were simultaneously recited by three speakers at different positions in space and presented via headphones to 14 younger and 14 older listeners (age ranges 19-25 and 54-64 years, respectively). The listeners had to respond when prices of target companies exceeded a specific value, but to ignore all other prices as well as beep sounds randomly interspersed within the stock prices. Older listeners did not produce more missed responses or longer response times than younger listeners. However, differences in event-related potentials indicated a reduced parietal P3b of older, relative to younger, listeners. Separate analyses for those listeners who performed relatively high or low in the behavioral task revealed a right-frontal P3a that was pronounced especially in the group of high-performing older listeners. Correlational analyses indicated a direct relationship between P3a amplitude and spoken language comprehension in older, but not younger, listeners. Furthermore, younger (especially, low-performing) listeners showed a more pronounced P2 to irrelevant beep sounds than older listeners. These subtle differences in cortical processing between age groups suggest that high performance of older middle-aged listeners in demanding listening situations is associated with increased engagement of frontal brain areas, and thus the allocation of mental resources for compensation of potential declines in spoken language understanding.

  10. How Does the Linguistic Distance between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances during Verbal Memory Examination

    ERIC Educational Resources Information Center

    Taha, Haitham

    2017-01-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using auditory verbal memory test that was adapted according to the Rey Auditory Verbal Learning Test and developed in three versions: Pure spoken language version (SL), pure standard language version (SA), and…

  11. Reply to David Kemmerer's "A Critique of Mark D. Allen's "The Preservation of Verb Subcategory Knowledge in a Spoken Language Comprehension Deficit""

    ERIC Educational Resources Information Center

    Allen, Mark D.; Owens, Tyler E.

    2008-01-01

    Allen [Allen, M. D. (2005). The preservation of verb subcategory knowledge in a spoken language comprehension deficit. "Brain and Language," 95, 255-264] presents evidence from a single patient, WBN, to motivate a theory of lexical processing and representation in which syntactic information may be encoded and retrieved independently of semantic…

  12. Will They Catch Up? The Role of Age at Cochlear Implantation in the Spoken Language Development of Children with Severe-Profound Hearing Loss

    PubMed Central

    Nicholas, Johanna Grant; Geers, Ann E.

    2010-01-01

    Purpose: We examined the benefits of younger cochlear implantation, longer cochlear implant use, and greater pre-implant aided hearing to spoken language at 3.5 and 4.5 years of age. Method: Language samples were obtained at age 3.5 and 4.5 years from 76 children implanted by their third birthday. Hierarchical Linear Modeling (HLM) was employed to identify characteristics associated with spoken language outcomes at the two test ages. The Preschool Language Scale was used to compare skills with those of hearing age-mates at age 4.5. Results: Expected language scores increased with younger age at implant and lower pre-implant thresholds, even when compared at the same duration of implant use. Expected Preschool Language Scale (PLS) scores of the children who received the implant at the youngest ages reached those of hearing age-mates by 4.5 years, but those implanted after 24 months of age did not “catch up” with hearing peers. Conclusions: Children who received a cochlear implant before a substantial delay in spoken language developed (i.e., between 12-16 months) were more likely to achieve age-appropriate spoken language. These results favor cochlear implantation before 24 months of age, especially for children with better-ear aided pure-tone average thresholds greater than 65 dB prior to surgery. PMID:17675604

  13. Compound nouns in spoken language production by speakers with aphasia compared to neurologically healthy speakers: an exploratory study.

    PubMed

    Eiesland, Eli Anne; Lind, Marianne

    2012-03-01

    Compounds are words that are made up of at least two other words (lexemes), featuring lexical and syntactic characteristics and thus particularly interesting for the study of language processing. Most studies of compounds and language processing have been based on data from experimental single word production and comprehension tasks. To enhance the ecological validity of morphological processing research, data from other contexts, such as discourse production, need to be considered. This study investigates the production of nominal compounds in semi-spontaneous spoken texts by a group of speakers with fluent types of aphasia compared to a group of neurologically healthy speakers. The speakers with aphasia produce significantly fewer nominal compound types in their texts than the non-aphasic speakers, and the compounds they produce exhibit fewer different types of semantic relations than the compounds produced by the non-aphasic speakers. The results are discussed in relation to theories of language processing.

  14. The Chinese Immigrant: Language and Cultural Concerns

    ERIC Educational Resources Information Center

    Chan, Ivy

    1976-01-01

    Administrators and teachers must be more understanding of the linguistic and cultural problems that Chinese students in Canada face. They are potentially very good English language learners. Their families emphasize academic achievement. If special attention is given to their problems, they will be successful students. (CFM)

  15. Articulation in Programs of Chinese Language.

    ERIC Educational Resources Information Center

    Fenn, Henry C.

    1970-01-01

    The teaching of the Chinese language in the United States needs first to depart from the classical attitude that the sole goal is research, and to include among its objectives the occupational needs of all types of learners. To meet the problem of student "nomadism" at home and abroad, there should be certification of all transfers in terms of…

  16. Chunk Learning and the Development of Spoken Discourse in a Japanese as a Foreign Language Classroom

    ERIC Educational Resources Information Center

    Taguchi, Naoko

    2007-01-01

    This study examined the development of spoken discourse among L2 learners of Japanese who received extensive practice on grammatical chunks. Participants in this study were 22 college students enrolled in an elementary Japanese course. They received instruction on a set of grammatical chunks in class through communicative drills and the…

  17. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2012-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an event-related potential (ERP) norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a…

  19. KANNADA--A CULTURAL INTRODUCTION TO THE SPOKEN STYLES OF THE LANGUAGE.

    ERIC Educational Resources Information Center

    Krishnamurthi, M.G.; McCormack, William

    The twenty graded units in this text constitute an introduction to both informal and formal spoken Kannada. The first two units present the Kannada material in phonetic transcription only, with Kannada script gradually introduced from Unit III on. A typical lesson-unit includes: (1) a dialog in phonetic transcription and English translation, (2)…

  20. Are Phonological Representations of Printed and Spoken Language Isomorphic? Evidence from the Restrictions on Unattested Onsets

    ERIC Educational Resources Information Center

    Berent, Iris

    2008-01-01

    Are the phonological representations of printed and spoken words isomorphic? This question is addressed by investigating the restrictions on onsets. Cross-linguistic research suggests that onsets of rising sonority are preferred to sonority plateaus, which, in turn, are preferred to sonority falls (e.g., bnif, bdif, lbif). Of interest is whether…

  1. Social inclusion for children with hearing loss in listening and spoken Language early intervention: an exploratory study.

    PubMed

    Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony

    2017-03-14

    Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social roles'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using an LSL approach (n = 78; a 37% response rate). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.

  2. Spoken language comprehension of phrases, simple and compound-active sentences in non-speaking children with severe cerebral palsy.

    PubMed

    Geytenbeek, Joke J M; Heim, Margriet J M; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J

    2015-07-01

    Children with severe cerebral palsy (CP) (i.e. 'non-speaking children with severely limited mobility') are restricted in many domains that are important to the acquisition of language. To investigate comprehension of spoken language at the sentence-type level in non-speaking children with severe CP. From an original sample of 87 non-speaking children with severe CP, 68 passed the pre-test (i.e. they matched at least five spoken words to the corresponding objects) of a specifically developed computer-based instrument for low motor language testing (C-BiLLT), admitting them to the actual C-BiLLT computer test. As a result, the present study included 68 children with severe CP (35 boys, 33 girls; mean age 6;11 years, SD 3;0 years; age range 1;9-11;11 years) who were investigated with the C-BiLLT for comprehension of different sentence types: phrases, simple active sentences (with one or two arguments) and compound sentences. The C-BiLLT provides norm data for typically developing (TD) children (1;6-6;6 years). Binomial logistic regression analyses were used to compare the percentage correct of each sentence type in children with severe CP with that in TD children (subdivided into age groups) and to compare percentage correct within the CP subtypes. Sentence comprehension in non-speaking children with severe CP followed the developmental trajectory of TD children, but at a much slower rate; nevertheless, they were still developing up to at least age 12 years. Delays in sentence-type comprehension increased with sentence complexity and showed large variability between individual children and between subtypes of CP. Comprehension of simple and syntactically more complex sentences was significantly better in children with dyskinetic CP than in children with spastic CP. Of the children with dyskinetic CP, 10-13% showed comprehension of simple and compound sentences within the percentage correct of TD children, as opposed to none of the children with spastic CP. In non…
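
    The group comparison described in this abstract rests on binomial logistic regression over correct/incorrect responses. As a hedged illustration only (the counts below are invented, not the study's data, and the study's actual model included age-group covariates), a minimal pure-Python sketch fits a logistic model with a single group indicator by gradient ascent and reads off the group odds ratio:

```python
import math

# Invented counts (NOT the study's data): correct responses on compound
# sentences for typically developing (TD) children and children with CP.
td_correct, td_total = 180, 200
cp_correct, cp_total = 30, 68

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Fit P(y=1) = sigmoid(b0 + b1*x) by gradient ascent on the
    Bernoulli log-likelihood (averaged gradient per step)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Expand the counts into per-trial observations; x = 1 marks the CP group.
xs = [0] * td_total + [1] * cp_total
ys = ([1] * td_correct + [0] * (td_total - td_correct)
      + [1] * cp_correct + [0] * (cp_total - cp_correct))

b0, b1 = fit_logistic(xs, ys)
odds_ratio = math.exp(b1)  # odds of a correct response, CP relative to TD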

  3. Brain-based translation: fMRI decoding of spoken words in bilinguals reveals language-independent semantic representations in anterior temporal lobe.

    PubMed

    Correia, João; Formisano, Elia; Valente, Giancarlo; Hausfeld, Lars; Jansma, Bernadette; Bonte, Milene

    2014-01-01

    Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding of spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within language were distributed across multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across languages were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within-semantic-category discriminations.
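
    The across-language generalization analysis in this abstract (train a decoder on words in one language, test it on their translation equivalents) can be illustrated with a toy decoder. In this hedged sketch the patterns, dimensions, and the nearest-centroid rule are all invented stand-ins, not the study's fMRI pipeline: each concept has a language-independent component shared across simulated "English" and "Dutch" trials, and a classifier trained on one language is tested on the other.

```python
import random

random.seed(0)
DIM, CONCEPTS, TRIALS = 50, 4, 30  # 4 concepts, as in the study's word set

def randvec(dim, scale=1.0):
    return [random.gauss(0.0, scale) for _ in range(dim)]

# A language-independent "semantic" pattern per concept, plus a
# language-specific pattern per (concept, language) pair.
semantic = {c: randvec(DIM) for c in range(CONCEPTS)}
acoustic = {(c, lang): randvec(DIM)
            for c in range(CONCEPTS) for lang in ("en", "nl")}

def trial(c, lang, noise=1.0):
    # One simulated response pattern: shared signal + language signal + noise.
    return [s + a + random.gauss(0.0, noise)
            for s, a in zip(semantic[c], acoustic[(c, lang)])]

def centroid(vectors):
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def classify(v, centroids):
    # Nearest centroid by squared Euclidean distance.
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: d2(v, centroids[c]))

# Train centroids on "English" trials, test on "Dutch" trials:
# above-chance accuracy indicates language-independent information.
centroids = {c: centroid([trial(c, "en") for _ in range(TRIALS)])
             for c in range(CONCEPTS)}
test_set = [(c, trial(c, "nl")) for c in range(CONCEPTS) for _ in range(TRIALS)]
accuracy = sum(classify(v, centroids) == c for c, v in test_set) / len(test_set)
```

    Because only the shared "semantic" component survives the language switch, accuracy stays above the 25% chance level for four concepts; delete the shared component and the decoder falls to chance.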

  4. Fremdsprachenunterricht als Kommunikationsprozess (Foreign Language Teaching as a Communicative Process). Language Centre News, No. 1. Focus on Spoken Language.

    ERIC Educational Resources Information Center

    Butzkamm, Wolfgang

    Teaching, as a communicative process, ranges between purely message-oriented communication (the goal) and purely language-oriented communication (a means). Classroom discourse ("Close the window", etc.) is useful as a drill but is also message-oriented. Skill in message-oriented communication is acquired only through practice in this kind of…

  6. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials.

    PubMed

    Boudewyn, Megan A; Gordon, Peter C; Long, Debra; Polse, Lara; Swaab, Tamara Y

    2012-06-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., "Luckily Ben had picked up some salt and pepper/basil", preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent)). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition.

  7. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    PubMed Central

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2011-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., “Luckily Ben had picked up some salt and pepper/basil”, preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent)). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition. PMID:23002319

  8. Heritage Learners in the Chinese Language Classroom: Home Background

    ERIC Educational Resources Information Center

    Xiao, Yun

    2006-01-01

    Studies from information-processing and language comprehension research have reported that background knowledge facilitates reading and writing. By comparing Chinese language development of heritage students who had home background in Chinese language and culture with those who did not, this study found that heritage learners did significantly…

  9. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    PubMed

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample size was 60 Persian 5-7-year-old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experiments: auditory-only and audiovisual presentation conditions. The test was a closed set of 30 words which were orally presented by a speech-language pathologist. The scores for audiovisual word perception were significantly higher than for the auditory-only condition in the children with normal hearing (P<0.01) and cochlear implants (P<0.05); however, in the children with hearing aids, there was no significant difference between word perception scores in the auditory-only and audiovisual presentation conditions (P>0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe-to-profound hearing loss in order to find out whether a cochlear implant or hearing aid has been efficient for them; i.e. if a child with hearing impairment who uses a CI or HA obtains higher scores in audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have developed appropriately thanks to an effective CI or HA as one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. S-ESL Spoken English Test (The Standard English as a Second Language Spoken English Test), Tests A, B, C [and] S-ESL Grammar Test (The Standard English as a Second Language Grammar Test), Forms O, A, B, C [and] S-ESL Vocabulary Test (The Standard English as a Second Language Vocabulary Test), Forms O, A, B, C.

    ERIC Educational Resources Information Center

    Pedersen, Elray L.

    The three tests that make up this instrument are designed to assess the oral communication, grammatical fluency, and vocabulary development of students for whom English is a second language. The spoken English test comes in two versions: one with 90 items on a cassette tape, the other with 90 items to be read aloud by the examiner. Each version is…

  11. A Glimpse of the Chinese Language: Peking's Language Reforms and the Teaching of Chinese in the United States.

    ERIC Educational Resources Information Center

    Shieh, Francis

    This paper is intended to provide "an informative general survey" for those persons interested in Chinese, a language used by 25 percent of the world's population. One of the earliest languages in recorded form, written Chinese has both classical and modern forms. Language reforms in Peking, designed to standardize and simplify spoken…

  12. A Multilingual Approach to Analysing Standardized Test Results: Immigrant Primary School Children and the Role of Languages Spoken in a Bi-/Multilingual Community

    ERIC Educational Resources Information Center

    De Angelis, Gessica

    2014-01-01

    The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…

  14. How and When Accentuation Influences Temporally Selective Attention and Subsequent Semantic Processing during On-Line Spoken Language Comprehension: An ERP Study

    ERIC Educational Resources Information Center

    Li, Xiao-qing; Ren, Gui-qin

    2012-01-01

    An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…

  16. Self-Ratings of Spoken Language Dominance: A Multilingual Naming Test (MINT) and Preliminary Norms for Young and Aging Spanish-English Bilinguals

    ERIC Educational Resources Information Center

    Gollan, Tamar H.; Weissberger, Gali H.; Runnqvist, Elin; Montoya, Rosa I.; Cera, Cynthia M.

    2012-01-01

    This study investigated correspondence between different measures of bilingual language proficiency contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and…

  18. Literacy Affects Spoken Language in a Non-Linguistic Task: An ERP Study

    PubMed Central

    Perre, Laetitia; Bertrand, Daisy; Ziegler, Johannes C.

    2011-01-01

    It is now commonly accepted that orthographic information influences spoken word recognition in a variety of laboratory tasks (lexical decision, semantic categorization, gender decision). However, it remains a hotly debated issue whether or not orthography would influence normal word perception in passive listening. That is, the argument has been made that orthography might only be activated in laboratory tasks that require lexical or semantic access in some form or another. It is possible that these rather “unnatural” tasks invite participants to use orthographic information in a strategic way to improve task performance. To put the strategy account to rest, we conducted an event-related brain potential (ERP) study, in which participants were asked to detect a 500-ms-long noise burst that appeared on 25% of the trials (Go trials). In the NoGo trials, we presented spoken words that were orthographically consistent or inconsistent. Thus, lexical and/or semantic processing was not required in this task and there was no strategic benefit in computing orthography to perform this task. Nevertheless, despite the non-linguistic nature of the task, we replicated the consistency effect that has been previously reported in lexical decision and semantic tasks (i.e., inconsistent words produce more negative ERPs than consistent words as early as 300 ms after the onset of the spoken word). These results clearly suggest that orthography automatically influences word perception in normal listening even if there is no strategic benefit to do so. The results are explained in terms of orthographic restructuring of phonological representations. PMID:22025917

  19. A History Reclaimed. An Annotated Bibliography of Chinese Language Materials on the Chinese of America.

    ERIC Educational Resources Information Center

    Lai, Him Mark

    Because past frames of reference and perspectives on Chinese Americans have relied mainly on English language sources, they seldom reflected the attitudes and experiences of the Chinese themselves. This bibliography attempts to remedy that situation by listing over 1,500 available works in the Chinese language which can be found in libraries and…

  20. Developing Accuracy and Fluency in Spoken English of Chinese EFL Learners

    ERIC Educational Resources Information Center

    Wang, Zhiqin

    2014-01-01

    Chinese EFL learners may have difficulty in speaking fluent and accurate English, for their speaking competence are likely to be influenced by cognitive, linguistic and affective factors. With the aim to enhance those learners' oral proficiency, this paper first discusses three effective models of teaching English speaking, and then proposes a…

  1. Let's all speak together! Exploring the masking effects of various languages on spoken word identification in multi-linguistic babble.

    PubMed

    Gautreau, Aurore; Hoen, Michel; Meunier, Fanny

    2013-01-01

    This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a -5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a -5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At -5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian) with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performances on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.

  2. Let's All Speak Together! Exploring the Masking Effects of Various Languages on Spoken Word Identification in Multi-Linguistic Babble

    PubMed Central

    Gautreau, Aurore; Hoen, Michel; Meunier, Fanny

    2013-01-01

    This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a −5 dB Signal-to-Noise Ratio) and 2 lexical decision tasks (at a −5 dB and 0 dB SNR) that were performed with French spoken target words. In these 3 experiments we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At −5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian) with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performances on these tasks. By comparing the performances obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level. PMID:23785442
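
    Both versions of this abstract fix the level of the babble or fluctuating-noise masker relative to the spoken target words at -5 dB or 0 dB SNR. A hedged sketch of how such a mixture is typically constructed (toy signals standing in for the stimuli, not the study's materials): scale the masker so that the ratio of target RMS to masker RMS corresponds to the desired SNR in dB, then sum the two signals sample by sample.

```python
import math
import random

random.seed(1)

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def mix_at_snr(target, masker, snr_db):
    """Scale the masker so that 20*log10(rms(target)/rms(scaled masker))
    equals snr_db, then add the signals sample by sample."""
    gain = rms(target) / (rms(masker) * 10 ** (snr_db / 20))
    scaled = [gain * v for v in masker]
    mixture = [t + m for t, m in zip(target, scaled)]
    return mixture, scaled

# Toy stand-ins for a spoken target word and 4-talker babble (1 s at 16 kHz).
target = [math.sin(2 * math.pi * 220 * n / 16000) for n in range(16000)]
babble = [random.gauss(0.0, 0.3) for _ in range(16000)]

mixture, scaled_babble = mix_at_snr(target, babble, -5.0)
achieved_snr = 20 * math.log10(rms(target) / rms(scaled_babble))
```

    At -5 dB SNR the masker's RMS is about 1.78 times the target's, which is why the masker's informational content (babble vs. spectrally matched noise) matters so much in these tasks.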

  3. Will They Catch Up? The Role of Age at Cochlear Implantation in the Spoken Language Development of Children with Severe to Profound Hearing Loss

    ERIC Educational Resources Information Center

    Nicholas, Johanna Grant; Geers, Ann E.

    2007-01-01

    Purpose: The authors examined the benefits of younger cochlear implantation, longer cochlear implant use, and greater pre-implant aided hearing to spoken language at 3.5 and 4.5 years of age. Method: Language samples were obtained at ages 3.5 and 4.5 years from 76 children who received an implant by their 3rd birthday. Hierarchical linear modeling…

  5. Simulating Language-specific and Language-general Effects in a Statistical Learning Model of Chinese Reading

    PubMed Central

    Yang, Jianfeng; McCandliss, Bruce D.; Shu, Hua; Zevin, Jason D.

    2009-01-01

    Many theoretical models of reading assume that different writing systems require different processing assumptions. For example, it is often claimed that print-to-sound mappings in Chinese are not represented or processed sub-lexically. We present a connectionist model that learns the print to sound mappings of Chinese characters using the same functional architecture and learning rules that have been applied to English. The model predicts an interaction between item frequency and print-to-sound consistency analogous to what has been found for English, as well as a language-specific regularity effect particular to Chinese. Behavioral naming experiments using the same test items as the model confirmed these predictions. Corpus properties and the analyses of internal representations that evolved over training revealed that the model was able to capitalize on information in “phonetic components” – sub-lexical structures of variable size that convey probabilistic information about pronunciation. The results suggest that adult reading performance across very different writing systems may be explained as the result of applying the same learning mechanisms to the particular input statistics of writing systems shaped by both culture and the exigencies of communicating spoken language in a visual medium. PMID:20161189
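
    The frequency-by-consistency interaction this abstract describes can be reproduced in miniature with a delta-rule sketch. This is a hedged toy under invented parameters, not the paper's connectionist architecture: items share one "phonetic component" weight, exception items conflict with its dominant mapping, and training frequency determines how quickly an item-specific weight compensates.

```python
# Toy delta-rule model: every item shares one "phonetic component" weight;
# regular items agree with the component's dominant pronunciation (+1),
# exception items conflict with it (-1). Frequency = presentations per epoch.
items = [
    ("regular_hf", +1.0, 10),    # regular, high frequency
    ("regular_lf", +1.0, 1),     # regular, low frequency
    ("exception_hf", -1.0, 10),  # exception, high frequency
    ("exception_lf", -1.0, 1),   # exception, low frequency
]

w_item = {name: 0.0 for name, _, _ in items}
w_comp = 0.0  # shared sub-lexical ("phonetic component") weight
lr, epochs = 0.1, 20

for _ in range(epochs):
    for name, target, freq in items:
        for _ in range(freq):
            pred = w_item[name] + w_comp
            err = target - pred
            w_item[name] += lr * err  # item-specific learning
            w_comp += lr * err        # shared sub-lexical learning

# Squared error per item after training: largest for low-frequency
# exceptions, the signature frequency-by-consistency interaction.
sq_err = {name: (target - (w_item[name] + w_comp)) ** 2
          for name, target, _ in items}
```

    The shared weight is pulled toward the dominant pronunciation, so exception items must overcome it with item-specific learning; frequent exceptions get enough updates to do so, infrequent ones do not.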

  6. Fast Mapping Semantic Features: Performance of Adults with Normal Language, History of Disorders of Spoken and Written Language, and Attention Deficit Hyperactivity Disorder on a Word-Learning Task

    ERIC Educational Resources Information Center

    Alt, Mary; Gutmann, Michelle L.

    2009-01-01

    Purpose: This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD). Methods: Sixty-eight adults were required to associate a novel object with a novel label, and then…

  8. Looking Chinese and Learning Chinese as a Heritage Language: The Role of Habitus

    ERIC Educational Resources Information Center

    Mu, Guanglun Michael

    2016-01-01

    The identity-language link has been widely recognised. In Heritage Language research, the complex entanglement between Chinese ethnic identity and Chinese Heritage Language has gained increasing attention in recent decades. Both social psychological and poststructural schools have offered meaningful insights into this field, but have also received…

  10. English Language Ideologies in the Chinese Foreign Language Education Policies: A World-System Perspective

    ERIC Educational Resources Information Center

    Pan, Lin

    2011-01-01

    This paper investigates the Chinese state's English language ideologies as reflected in official Chinese foreign language education policies (FLEP). It contends that the Chinese FLEP not only indicate a way by which the state gains consent, maintains cultural governance, and exerts hegemony internally, but also shows the traces of the combined…

  11. Language Attitudes and Heritage Language Maintenance among Chinese Immigrant Families in the USA

    ERIC Educational Resources Information Center

    Zhang, Donghui; Slaughter-Defoe, Diana T.

    2009-01-01

    This qualitative study investigates attitudes toward heritage language (HL) maintenance among Chinese immigrant parents and their second-generation children. Specific attention is given to exploring (1) what attitudes are held by the Chinese parents and children toward Chinese language maintenance in the USA, (2) what efforts are engaged in by the…

  13. Development of lexical-semantic language system: N400 priming effect for spoken words in 18- and 24-month old children.

    PubMed

    Rämä, Pia; Sirri, Louah; Serres, Josette

    2013-04-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that the N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds, a similar effect was observed only in those children with higher word production ability. The results suggest that words are categorically organized in the mental lexicon of children at the age of 2 years, and even earlier in children with a high vocabulary. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Using the readiness potential of button-press and verbal response within spoken language processing.

    PubMed

    Jansen, Stefanie; Wesselmeier, Hendrik; de Ruiter, Jan P; Mueller, Horst M

    2014-07-30

    Although research on turn-taking in spoken dialogue is now abundant, a typical EEG signature associated with the anticipation of turn-ends has not yet been identified. The purpose of this study was to examine whether readiness potentials (RP) can be used to study the anticipation of turn-ends, using both a motoric finger-movement task and an articulatory-movement task. The goal was to determine the onset of early, preconscious turn-end anticipation processes by the simultaneous registration of EEG measures (RP) and behavioural measures (anticipation timing accuracy, ATA). For our behavioural measures, we used both button-press and verbal responses ("yes"). In the experiment, 30 subjects were asked to listen to auditorily presented utterances and press a button or utter a brief verbal response when they expected the end of the turn. During the task, a 32-channel EEG signal was recorded. The results showed that the RPs during verbal and button-press responses developed similarly and had an almost identical time course: the RP signals started to develop 1170 vs. 1190 ms before the behavioural responses. To date, turn-end anticipation has usually been studied with behavioural methods, for instance by measuring anticipation timing accuracy, a measure that reflects conscious behavioural processes and is insensitive to preconscious anticipation processes. The similar time course of the recorded RP signals for both verbal and button-press responses provides evidence for the validity of using RPs as an online marker for response preparation in turn-taking and spoken dialogue research. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. The Language Problems of Chinese Children in America.

    ERIC Educational Resources Information Center

    Chao, Yuen Ren

    The author encourages American parents of Chinese ancestry to use the Chinese language with their children whenever this is possible. It is a major opportunity for children to learn an important second language while they are young enough to do so naturally, without conscious effort. Although it is more difficult when parents are speakers of…

  16. Teacher Education Curriculum for Teaching Chinese as a Foreign Language

    ERIC Educational Resources Information Center

    Attaran, Mohammad; Yishuai, Hu

    2015-01-01

    The growing worldwide demand for CFL (Chinese as a Foreign Language) teachers has many implications for both curriculum development and teacher education. Much evidence has shown that more in-depth research is needed in the field of teaching Chinese as a foreign language (Tsung & Cruickshank, 2011). Studying in-service teachers' experience in…

  17. Chinese Language Education in Europe: The Confucius Institutes

    ERIC Educational Resources Information Center

    Starr, Don

    2009-01-01

    This article explores the background to the Chinese government's decision to embark on a programme of promoting the study of Chinese language and culture overseas. This includes the impact of Joseph Nye's concept of "soft power" in China, ownership of the national language, the Confucius connection, and how these factors interact with…

  18. The Role of Parents in Chinese Heritage-Language Schools

    ERIC Educational Resources Information Center

    Li, Mengying

    2005-01-01

    This paper looks at the Chinese heritage language schools in the metropolitan Phoenix area and examines what role the students' parents play in the schools. Based on semi-structured interviews, class observations, and publications from the local Chinese schools, this study shows that although Chinese schools have benefited from the support of parents…

  20. The Language, Tone and Prosody of Emotions: Neural Substrates and Dynamics of Spoken-Word Emotion Perception.

    PubMed

    Liebenthal, Einat; Silbersweig, David A; Stern, Emily

    2016-01-01

    Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala (a subcortical center for emotion perception) are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.

  1. Foreign Language Authoring Systems: Judith Frommer's MacLang Spoken Here.

    ERIC Educational Resources Information Center

    Frommer, Judith

    1987-01-01

    MacLang, a user-friendly authoring system for the Macintosh computer, helps second language teachers prepare and tailor computer exercises and activities; is usable with any Roman-alphabet language, Russian, and Greek; gives students intelligent feedback; and allows students to control computer activity in order to individualize learning.…

  2. Do Spoken Nonword and Sentence Repetition Tasks Discriminate Language Impairment in Children with an ASD?

    ERIC Educational Resources Information Center

    Harper-Hill, Keely; Copland, David; Arnott, Wendy

    2013-01-01

    The primary aim of this paper was to investigate heterogeneity in language abilities of children with a confirmed diagnosis of an ASD (N = 20) and children with typical development (TD; N = 15). Group comparisons revealed no differences between ASD and TD participants on standard clinical assessments of language ability, reading ability or…

  3. Propositional Density in Spoken and Written Language of Czech-Speaking Patients with Mild Cognitive Impairment

    ERIC Educational Resources Information Center

    Smolík, Filip; Stepankova, Hana; Vyhnálek, Martin; Nikolai, Tomáš; Horáková, Karolína; Matejka, Štepán

    2016-01-01

    Purpose: Propositional density (PD) is a measure of content richness in language production that declines in normal aging and more profoundly in dementia. The present study aimed to develop a PD scoring system for Czech and use it to compare PD in language productions of older people with amnestic mild cognitive impairment (aMCI) and control…

  4. The Sound of Motion in Spoken Language: Visual Information Conveyed by Acoustic Properties of Speech

    ERIC Educational Resources Information Center

    Shintel, Hadas; Nusbaum, Howard C.

    2007-01-01

    Language is generally viewed as conveying information through symbols whose form is arbitrarily related to their meaning. This arbitrary relation is often assumed to also characterize the mental representations underlying language comprehension. We explore the idea that visuo-spatial information can be analogically conveyed through acoustic…

  6. Task-Oriented Spoken Dialog System for Second-Language Learning

    ERIC Educational Resources Information Center

    Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun

    2016-01-01

    This paper introduces a Dialog-Based Computer Assisted second-Language Learning (DB-CALL) system using task-oriented dialogue processing technology. The system promotes dialogue with a second-language learner for a specific task, such as purchasing tour tickets, ordering food, passing through immigration, etc. The dialog system plays the role of a…

  8. Shy and Soft-Spoken: Shyness, Pragmatic Language, and Socioemotional Adjustment in Early Childhood

    ERIC Educational Resources Information Center

    Coplan, Robert J.; Weeks, Murray

    2009-01-01

    The goal of this study was to examine the moderating role of pragmatic language in the relations between shyness and indices of socio-emotional adjustment in an unselected sample of early elementary school children. In particular, we sought to explore whether pragmatic language played a protective role for shy children. Participants were n = 167…

  10. Talk or Chat? Chatroom and Spoken Interaction in a Language Classroom

    ERIC Educational Resources Information Center

    Hamano-Bunce, Douglas

    2011-01-01

    This paper describes a study comparing chatroom and face-to-face oral interaction for the purposes of language learning in a tertiary classroom in the United Arab Emirates. It uses transcripts analysed for Language Related Episodes, collaborative dialogues, thought to be externally observable examples of noticing in action. The analysis is…

  11. Language Ability and Verbal and Nonverbal Executive Functioning in Deaf Students Communicating in Spoken English

    ERIC Educational Resources Information Center

    Remine, Maria D.; Care, Esther; Brown, P. Margaret

    2008-01-01

    The internal use of language during problem solving is considered to play a key role in executive functioning. This role provides a means for self-reflection and self-questioning during the formation of rules and plans and a capacity to control and monitor behavior during problem-solving activity. Given that increasingly sophisticated language is…

  12. Language Outcomes in Deaf or Hard of Hearing Teenagers Who Are Spoken Language Users: Effects of Universal Newborn Hearing Screening and Early Confirmation.

    PubMed

    Pimperton, Hannah; Kreppner, Jana; Mahon, Merle; Stevenson, Jim; Terlektsi, Emmanouela; Worsfold, Sarah; Yuen, Ho Ming; Kennedy, Colin R

    This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort. The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of their receptive and expressive language ability at 13-19 years. Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were small and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for those deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant in the case of receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78).
Exposure to UNHS did not account for significant

  14. Substance use in high school students in New South Wales, Australia, in relation to language spoken at home.

    PubMed

    Chen, J; Bauman, A; Rissel, C; Tang, K C; Forero, R; Flaherty, B

    2000-01-01

    To examine, for the first time, adolescent substance use by ethnicity, given the high proportion of migrants from non-English-speaking countries in New South Wales (NSW), Australia. Data from four surveys of NSW secondary school students in 1983, 1986, 1989, and 1992 were used for this analysis. The prevalence of substance use by whether English was spoken at home was stratified by sex and age using data from the most recent survey year. Adjusted odds ratios and 95% confidence intervals were produced by simultaneous logistic regression, adjusting for sex, age group, and the interaction term of sex and age for each of these substances, and for each survey year separately. Data from 1989 and 1992 were pooled to examine rates of substance use by ethnic subgroups reflecting migration patterns. The prevalence of smoking and alcohol and illicit drug use was consistently lower among NSW adolescents speaking a language other than English at home, compared with those speaking English at home, in all survey years. Only the prevalence of solvent sniffing was higher among younger adolescents speaking a language other than English at home. Students from Southeast Asia showed consistently lower rates of use of all substances compared with all other groups. Different opportunities for the prevention of adolescent substance use among native English speakers may be gained from non-English-speaking cultures.

  15. Research among Learners of Chinese as a Foreign Language. Chinese Language Teachers Association Monograph Series. Volume IV

    ERIC Educational Resources Information Center

    Everson, Michael E., Ed.; Shen, Helen H., Ed.

    2010-01-01

    Cutting-edge in its approach and international in its authorship, this fourth monograph in a series sponsored by the Chinese Language Teachers Association features eight research studies that explore a variety of themes, topics, and perspectives important to a variety of stakeholders in the Chinese language learning community. Employing a wide…

  16. Validation of the "Chinese Language Classroom Learning Environment Inventory" for Investigating the Nature of Chinese Language Classrooms

    ERIC Educational Resources Information Center

    Lian, Chua Siew; Wong, Angela F. L.; Der-Thanq, Victor Chen

    2006-01-01

    The Chinese Language Classroom Environment Inventory (CLCEI) is a bilingual instrument developed for use in measuring students' and teachers' perceptions toward their Chinese Language classroom learning environments in Singapore secondary schools. The English version of the CLCEI was customised from the English version of the "What is…

  18. Project ASPIRE: Spoken Language Intervention Curriculum for Parents of Low-socioeconomic Status and Their Deaf and Hard-of-Hearing Children.

    PubMed

    Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen

    2016-02-01

    To investigate the impact of a spoken language intervention curriculum aimed at improving the language environments that caregivers of low socioeconomic status (SES) provide for their deaf and hard-of-hearing (D/HH) children with cochlear implants and hearing aids, in order to support children's spoken language development. Quasiexperimental. Tertiary. Thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies of Medicaid or WIC/LINK) and children aged < 4.5 years, with hearing loss of ≥ 30 dB between 500 and 4000 Hz, using at least one amplification device with adequate amplification (hearing aid, cochlear implant, osseo-integrated device). Behavioral. Caregiver-directed educational intervention curriculum designed to improve D/HH children's early language environments. Changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], Conversational Turn Count [CTC]). Significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group. No significant changes in LENA outcomes. Results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.

  20. Bilateral versus unilateral cochlear implants in children: a study of spoken language outcomes.

    PubMed

    Sarant, Julia; Harris, David; Bennet, Lisa; Bant, Sharyn

    2014-01-01

    Although it has been established that bilateral cochlear implants (CIs) offer additional speech perception and localization benefits to many children with severe to profound hearing loss, whether these improved perceptual abilities facilitate significantly better language development has not yet been clearly established. The aims of this study were to compare the language abilities of children having unilateral and bilateral CIs, to quantify the rate of any improvement in language attributable to bilateral CIs, and to document other predictors of language development in children with CIs. The receptive vocabulary and language development of 91 children was assessed when they were aged either 5 or 8 years old by using the Peabody Picture Vocabulary Test (fourth edition), and either the Preschool Language Scales (fourth edition) or the Clinical Evaluation of Language Fundamentals (fourth edition), respectively. Cognitive ability, parent involvement in children's intervention or education programs, and family reading habits were also evaluated. Language outcomes were examined by using linear regression analyses. The influence of elements of parenting style, child characteristics, and family background as predictors of outcomes was examined. Children using bilateral CIs achieved significantly better vocabulary outcomes and significantly higher scores on the Core and Expressive Language subscales of the Clinical Evaluation of Language Fundamentals (fourth edition) than did comparable children with unilateral CIs. Scores on the Preschool Language Scales (fourth edition) did not differ significantly between children with unilateral and bilateral CIs. Bilateral CI use was found to predict significantly faster rates of vocabulary and language development than unilateral CI use; the magnitude of this effect was moderated by child age at activation of the bilateral CI.
In terms of parenting style, high levels of parental involvement, low amounts of screen time, and more time spent

  1. Spared and Impaired Spoken Discourse Processing in Schizophrenia: Effects of Local and Global Language Context

    PubMed Central

    Boudewyn, Megan A.; Long, Debra L.; Luck, Steve J.; Kring, Ann M.; Ragland, J. Daniel; Ranganath, Charan; Lesh, Tyler; Niendam, Tara; Solomon, Marjorie; Mangun, George R.; Carter, Cameron S.

    2013-01-01

    Individuals with schizophrenia are impaired in a broad range of cognitive functions, including impairments in the controlled maintenance of context-relevant information. In this study, we used ERPs in human subjects to examine whether impairments in the controlled maintenance of spoken discourse context in schizophrenia lead to overreliance on local associations among the meanings of individual words. Healthy controls (n = 22) and patients (n = 22) listened to short stories in which we manipulated global discourse congruence and local priming. The target word in the last sentence of each story was globally congruent or incongruent and locally associated or unassociated. ERP local association effects did not significantly differ between control participants and schizophrenia patients. However, in contrast to controls, patients only showed effects of discourse congruence when targets were primed by a word in the local context. When patients had to use discourse context in the absence of local priming, they showed impaired brain responses to the target. Our findings indicate that schizophrenia patients are impaired during discourse comprehension when demands on controlled maintenance of context are high. We further found that ERP measures of increased reliance on local priming predicted reduced social functioning, suggesting that alterations in the neural mechanisms underlying discourse comprehension have functional consequences in the illness. PMID:24068824

  2. Spared and impaired spoken discourse processing in schizophrenia: effects of local and global language context.

    PubMed

    Swaab, Tamara Y; Boudewyn, Megan A; Long, Debra L; Luck, Steve J; Kring, Ann M; Ragland, J Daniel; Ranganath, Charan; Lesh, Tyler; Niendam, Tara; Solomon, Marjorie; Mangun, George R; Carter, Cameron S

    2013-09-25

    Individuals with schizophrenia are impaired in a broad range of cognitive functions, including impairments in the controlled maintenance of context-relevant information. In this study, we used ERPs in human subjects to examine whether impairments in the controlled maintenance of spoken discourse context in schizophrenia lead to overreliance on local associations among the meanings of individual words. Healthy controls (n = 22) and patients (n = 22) listened to short stories in which we manipulated global discourse congruence and local priming. The target word in the last sentence of each story was globally congruent or incongruent and locally associated or unassociated. ERP local association effects did not significantly differ between control participants and schizophrenia patients. However, in contrast to controls, patients only showed effects of discourse congruence when targets were primed by a word in the local context. When patients had to use discourse context in the absence of local priming, they showed impaired brain responses to the target. Our findings indicate that schizophrenia patients are impaired during discourse comprehension when demands on controlled maintenance of context are high. We further found that ERP measures of increased reliance on local priming predicted reduced social functioning, suggesting that alterations in the neural mechanisms underlying discourse comprehension have functional consequences in the illness.

  3. Neural Organization of Spoken Language Revealed by Lesion-Symptom Mapping

    PubMed Central

    Mirman, Daniel; Chen, Qi; Zhang, Yongsheng; Wang, Ze; Faseyitan, Olufunsho K.; Coslett, H. Branch; Schwartz, Myrna F.

    2015-01-01

    Studies of patients with acquired cognitive deficits following brain damage and studies using contemporary neuroimaging techniques form two distinct streams of research on the neural basis of cognition. In this study, we combine high-quality structural neuroimaging analysis techniques and extensive behavioral assessment of patients with persistent acquired language deficits to study the neural basis of language. Our results reveal two major divisions within the language system – meaning vs. form and recognition vs. production – and their instantiation in the brain. Phonological form deficits are associated with lesions in peri-Sylvian regions, whereas semantic production and recognition deficits are associated with damage to the left anterior temporal lobe and white matter connectivity with frontal cortex, respectively. These findings provide a novel synthesis of traditional and contemporary views of the cognitive and neural architecture of language processing, emphasizing dual-routes for speech processing and convergence of white matter tracts for semantic control and/or integration. PMID:25879574

  4. Motor excitability during visual perception of known and unknown spoken languages.

    PubMed

    Swaminathan, Swathi; MacSweeney, Mairéad; Boyles, Rowan; Waters, Dafydd; Watkins, Kate E; Möttönen, Riikka

    2013-07-01

    It is possible to comprehend speech and discriminate languages by viewing a speaker's articulatory movements. Transcranial magnetic stimulation studies have shown that viewing speech enhances excitability in the articulatory motor cortex. Here, we investigated the specificity of this enhanced motor excitability in native and non-native speakers of English. Both groups were able to discriminate between speech movements related to a known (i.e., English) and unknown (i.e., Hebrew) language. The motor excitability was higher during observation of a known language than an unknown language or non-speech mouth movements, suggesting that motor resonance is enhanced specifically during observation of mouth movements that convey linguistic information. Surprisingly, however, the excitability was equally high during observation of a static face. Moreover, the motor excitability did not differ between native and non-native speakers. These findings suggest that the articulatory motor cortex processes several kinds of visual cues during speech communication.

  5. Measuring social desirability across language and sex: A comparison of Marlowe-Crowne Social Desirability Scale factor structures in English and Mandarin Chinese in Malaysia.

    PubMed

    Kurz, A Solomon; Drescher, Christopher F; Chin, Eu Gene; Johnson, Laura R

    2016-06-01

    Malaysia is a Southeast Asian country in which multiple languages are prominently spoken, including English and Mandarin Chinese. As psychological science continues to develop within Malaysia, there is a need for psychometrically sound instruments that measure psychological phenomena in multiple languages. For example, assessment tools for measuring social desirability could be a useful addition in psychological assessments and research studies in a Malaysian context. This study examined the psychometric performance of the English and Mandarin Chinese versions of the Marlowe-Crowne Social Desirability Scale when used in Malaysia. Two hundred and eighty-three students (64% female; 83% Chinese, 9% Indian) from two college campuses completed the Marlowe-Crowne Social Desirability Scale in their language of choice (i.e., English or Mandarin Chinese). Proposed factor structures were compared with confirmatory factor analysis, and multiple indicators-multiple causes models were used to examine measurement invariance across language and sex. Factor analyses supported a two-factor structure (i.e., Attribution and Denial) for the measure. Invariance tests revealed the scale was invariant by sex, indicating that social desirability can be interpreted similarly across sex. The scale was partially invariant by language version, with some non-invariance observed within the Denial factor. Non-invariance may be related to differences in the English and Mandarin Chinese languages, as well as cultural differences. Directions for further research include examining the measurement of social desirability in other contexts where both English and Mandarin Chinese are spoken (i.e., China) and further examining the causes of non-invariance on specific items. © 2016 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  6. INTRODUCTION TO SPOKEN VIETNAMESE.

    ERIC Educational Resources Information Center

    Jones, Robert B.; Thong, Huynh S.

    This text was based on the Vietnamese language as spoken by educated people of Saigon. For residents of Saigon it may be considered as correct. For the use of other than Saigon residents, some changes will be required, particularly in Part I (Pronunciation) and Part II (Lessons 1-6), which have been presented in a phonemic transcription tailored…

  7. Auditory Cortical Activity During Cochlear Implant-Mediated Perception of Spoken Language, Melody, and Rhythm

    PubMed Central

    Molloy, Anne T.; Jiradejvong, Patpong; Braun, Allen R.

    2009-01-01

    Despite the significant advances in language perception for cochlear implant (CI) recipients, music perception continues to be a major challenge for implant-mediated listening. Our understanding of the neural mechanisms that underlie successful implant listening remains limited. To our knowledge, this study represents the first neuroimaging investigation of music perception in CI users, with the hypothesis that CI subjects would demonstrate greater auditory cortical activation than normal hearing controls. H2(15)O positron emission tomography (PET) was used here to assess auditory cortical activation patterns in ten postlingually deafened CI patients and ten normal hearing control subjects. Subjects were presented with language, melody, and rhythm tasks during scanning. Our results show significant auditory cortical activation in implant subjects in comparison to control subjects for language, melody, and rhythm. The greatest activity in CI users compared to controls was seen for language tasks, which is thought to reflect both implant and neural specializations for language processing. For musical stimuli, PET scanning revealed significantly greater activation during rhythm perception in CI subjects (compared to control subjects), and the least activation during melody perception, which was the most difficult task for CI users. These results may suggest a possible relationship between auditory performance and degree of auditory cortical activation in implant recipients that deserves further study. PMID:19662456

  8. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    PubMed

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. 
When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to…

  9. Oracle Bones and Mandarin Tones: Demystifying the Chinese Language.

    ERIC Educational Resources Information Center

    Michigan Univ., Ann Arbor. Project on East Asian Studies in Education.

    This publication will provide secondary level students with a basic understanding of the development and structure of Chinese (guo yu) language characters. The authors believe that demystifying the language helps break many cultural barriers. The written language is a good place to begin because its pictographic nature is appealing and inspires…

  10. Language Learning Strategies and English Proficiency of Chinese University Students

    ERIC Educational Resources Information Center

    Nisbet, Deanna L.; Tindall, Evie R.; Arroyo, Alan A.

    2005-01-01

    This study investigated the relationship between language learning strategy (LLS) preferences and English proficiency among Chinese university students. Oxford's (1990), Strategy Inventory for Language Learning (SILL) and an institutional version (ITP) of the Test of English as a Foreign Language (TOEFL) were administered to 168 third-year English…

  12. Comparing Local and International Chinese Students' English Language Learning Strategies

    ERIC Educational Resources Information Center

    Anthony, Margreat Aloysious; Ganesen, Sree Nithya

    2012-01-01

    According to Horwitz (1987), learners' beliefs about language learning are influenced by previous language learning experiences as well as cultural background. This study examined the English language learning strategies of local and international Chinese students who share the same cultural background but have been exposed to different…

  13. Educating Teachers of "Chinese as a Local/Global Language": Teaching "Chinese with Australian Characteristics"

    ERIC Educational Resources Information Center

    Singh, Michael; Han, Jinghe

    2014-01-01

    How can the education of teacher-researchers from China be framed in ways that might make Chinese learnable for primary and secondary school learners for whom English is their everyday language of instruction and communication? The concept "making Chinese learnable" and the characteristics of the language learners are explained in the…

  14. Semantic Radical Knowledge and Word Recognition in Chinese for Chinese as Foreign Language Learners

    ERIC Educational Resources Information Center

    Su, Xiaoxiang; Kim, Young-Suk

    2014-01-01

    In the present study, we examined the relation of knowledge of semantic radicals to students' language proficiency and word reading for adult Chinese-as-a-foreign language students. Ninety-seven college students rated their proficiency in speaking, listening, reading, and writing in Chinese, and were administered measures of receptive and…

  17. Lévy-like diffusion in eye movements during spoken-language comprehension

    NASA Astrophysics Data System (ADS)

    Stephen, Damian G.; Mirman, Daniel; Magnuson, James S.; Dixon, James A.

    2009-05-01

    This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been convention to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.

  18. Language Skills in Classical Chinese Text Comprehension.

    PubMed

    Lau, Kit-Ling

    2017-09-06

    This study used both quantitative and qualitative methods to explore the role of lower- and higher-level language skills in classical Chinese (CC) text comprehension. A CC word and sentence translation test, a text comprehension test, and a questionnaire were administered to 393 Secondary Four students, 12 of whom were randomly selected to participate in retrospective interviews. The findings revealed that students' CC reading performance was unsatisfactory at both the lower (word and sentence) level and the text level. Among the different factors examined, the most crucial to CC reading comprehension was lower-level reading skill. Owing to their weak lower-level reading skills, participants relied heavily on contextual clues when reading in CC. The implications of these findings for understanding the factors that contribute to CC reading comprehension, and for planning effective instruction to enhance students' CC reading competence, are discussed.

  19. Interactive Patterns in Paired Discussions between Chinese Heritage and Chinese Foreign Language Learners

    ERIC Educational Resources Information Center

    Huang, Yi-Tzu

    2013-01-01

    Having acquired some degree of oral proficiency but low (or non-existent) literacy, Chinese heritage learners (CHLs) have learning needs different from those of Chinese foreign language learners (CFLs), who have learned Chinese only in the classroom setting. Although researchers have advocated for a separate curriculum for…

  20. Language-universal sensory deficits in developmental dyslexia: English, Spanish, and Chinese.

    PubMed

    Goswami, Usha; Wang, H-L Sharon; Cruz, Alicia; Fosker, Tim; Mead, Natasha; Huss, Martina

    2011-02-01

    Studies in sensory neuroscience reveal the critical importance of accurate sensory perception for cognitive development. There is considerable debate concerning the possible sensory correlates of phonological processing, the primary cognitive risk factor for developmental dyslexia. Across languages, children with dyslexia have a specific difficulty with the neural representation of the phonological structure of speech. The identification of a robust sensory marker of phonological difficulties would enable early identification of risk for developmental dyslexia and early targeted intervention. Here, we explore whether phonological processing difficulties are associated with difficulties in processing acoustic cues to speech rhythm. Speech rhythm is used across languages by infants to segment the speech stream into words and syllables. Early difficulties in perceiving auditory sensory cues to speech rhythm and prosody could lead developmentally to impairments in phonology. We compared matched samples of children with and without dyslexia, learning three very different spoken and written languages, English, Spanish, and Chinese. The key sensory cue measured was rate of onset of the amplitude envelope (rise time), known to be critical for the rhythmic timing of speech. Despite phonological and orthographic differences, for each language, rise time sensitivity was a significant predictor of phonological awareness, and rise time was the only consistent predictor of reading acquisition. The data support a language-universal theory of the neural basis of developmental dyslexia on the basis of rhythmic perception and syllable segmentation. They also suggest that novel remediation strategies on the basis of rhythm and music may offer benefits for phonological and linguistic development.
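
    The key sensory cue in this abstract, rise time of the amplitude envelope, is straightforward to compute. The sketch below is an assumption-laden illustration (not the study's analysis pipeline): it builds a tone with a controllable onset ramp, extracts a crude envelope by rectification and moving-average smoothing, and reports the 10%-90% rise time.

```python
# Illustrative rise-time measurement on a synthetic tone. The sample rate,
# smoothing window, and 10%-90% criterion are our own assumptions.
import math

FS = 8000  # sample rate, Hz

def make_tone(rise_ms, total_ms=200, freq=500.0):
    """A tone whose amplitude ramps up linearly over `rise_ms` milliseconds."""
    n = int(FS * total_ms / 1000)
    n_rise = int(FS * rise_ms / 1000)
    return [min(1.0, i / n_rise) * math.sin(2 * math.pi * freq * i / FS)
            for i in range(n)]

def rise_time_ms(signal, win=80):
    """10%-90% rise time of the rectified, moving-average envelope."""
    rect = [abs(s) for s in signal]
    env = [sum(rect[max(0, i - win):i + 1]) / win for i in range(len(rect))]
    peak = max(env)
    t10 = next(i for i, e in enumerate(env) if e >= 0.1 * peak)
    t90 = next(i for i, e in enumerate(env) if e >= 0.9 * peak)
    return (t90 - t10) * 1000 / FS

sharp = rise_time_ms(make_tone(rise_ms=15))  # abrupt onset
slow = rise_time_ms(make_tone(rise_ms=80))   # gradual onset
print(f"sharp onset: {sharp:.0f} ms, slow onset: {slow:.0f} ms")
```

    Stimuli in rise-time sensitivity tasks typically vary this ramp duration; the measure above is the acoustic property listeners are asked to discriminate.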

  1. Bilingual signed and spoken language acquisition from birth: implications for the mechanisms underlying early bilingual language acquisition.

    PubMed

    Petitto, L A; Katerelos, M; Levy, B G; Gauna, K; Tétreault, K; Ferraro, V

    2001-06-01

    Divergent hypotheses exist concerning the types of knowledge underlying early bilingualism, with some portraying a troubled course marred by language delays and confusion, and others portraying one that is largely unremarkable. We studied the extraordinary case of bilingual acquisition across two modalities to examine these hypotheses. Three children acquiring Langues des Signes Québécoise and French, and three children acquiring French and English (ages at onset approximately 1;0, 2;6 and 3;6 per group) were videotaped regularly over one year while we empirically manipulated novel and familiar speakers of each child's two languages. The results revealed that both groups achieved their early linguistic milestones in each of their languages at the same time (and similarly to monolinguals), produced a substantial number of semantically corresponding words in each of their two languages from their very first words or signs (translation equivalents), and demonstrated sensitivity to the interlocutor's language by altering their language choices. Children did mix their languages to varying degrees, and some persisted in using a language that was not the primary language of the addressee, but the propensity to do both was directly related to their parents' mixing rates, in combination with their own developing language preference. The signing-speaking bilinguals did exploit the modality possibilities, and they did simultaneously mix their signs and speech, but in semantically principled and highly constrained ways. It is concluded that the capacity to differentiate between two languages is well in place prior to first words, and it is hypothesized that this capacity may result from biological mechanisms that permit the discovery of early phonological representations. Reasons why paradoxical views of bilingual acquisition have persisted are also offered.

  2. Home Literacy Environment and Its Influence on Singaporean Children's Chinese Oral and Written Language Abilities

    ERIC Educational Resources Information Center

    Li, Li; Tan, Chee Lay

    2016-01-01

    In a bilingual environment such as the Singaporean Chinese community, the challenge of maintaining the Chinese language and sustaining Chinese culture lies in promoting the daily use of Chinese, in oral and written forms, among children. Ample evidence has shown the effect of the home language and literacy environment (HLE) on children's language and…

  4. Introduction to Spoken Telugu. Program in Oriental Languages. Publication Series B--Aids--Number 18.

    ERIC Educational Resources Information Center

    Lisker, Leigh

    This is a self-teaching textbook for learning Telugu, designed to be used with a language guide or a native speaker of Telugu. The emphasis is placed on speaking rather than on reading or writing. The book opens with a phonetic preface, to introduce the student to the sounds of Telugu. The textbook itself consists of thirty lessons, each divided…

  5. Changes to English as an Additional Language Writers' Research Articles: From Spoken to Written Register

    ERIC Educational Resources Information Center

    Koyalan, Aylin; Mumford, Simon

    2011-01-01

    The process of writing journal articles is increasingly being seen as a collaborative process, especially where the authors are English as an Additional Language (EAL) academics. This study examines the changes made in terms of register to EAL writers' journal articles by a native-speaker writing centre advisor at a private university in Turkey.…

  6. Assessing Spoken Language Competence in Children with Selective Mutism: Using Parents as Test Presenters

    ERIC Educational Resources Information Center

    Klein, Evelyn R.; Armstrong, Sharon Lee; Shipon-Blum, Elisa

    2013-01-01

    Children with selective mutism (SM) display a failure to speak in select situations despite speaking when comfortable. The purpose of this study was to obtain valid assessments of receptive and expressive language in 33 children (ages 5 to 12) with SM. Because some children with SM will speak to parents but not a professional, another purpose was…

  8. Cross-Sensory Correspondences and Symbolism in Spoken and Written Language

    ERIC Educational Resources Information Center

    Walker, Peter

    2016-01-01

    Lexical sound symbolism in language appears to exploit the feature associations embedded in cross-sensory correspondences. For example, words incorporating relatively high acoustic frequencies (i.e., front/close rather than back/open vowels) are deemed more appropriate as names for concepts associated with brightness, lightness in weight,…

  9. The Interplay between Spoken Language and Informal Definitions of Statistical Concepts

    ERIC Educational Resources Information Center

    Lavy, Ilana; Mashiach-Eizenberg, Michal

    2009-01-01

    Various terms are used to describe mathematical concepts, in general, and statistical concepts, in particular. Regarding statistical concepts in the Hebrew language, some of these terms have the same meaning both in their everyday use and in mathematics, such as Mode; some of them have a different meaning, such as Expected value and Life…

  10. The Influence of Spoken Language Patterns on the Writing of Black College Freshmen.

    ERIC Educational Resources Information Center

    Scott, Jerrie Cobb

    A study explored the relationship between oral and written patterns produced by a group of black college freshmen enrolled in remedial writing classes. Forty students were asked to produce, in formal language style, both oral and written summaries of a reading selection. The data were analyzed to determine (1) the extent to which patterns,…

  11. Spoken Interaction in Online and Face-to-Face Language Tutorials

    ERIC Educational Resources Information Center

    Heins, Barbara; Duensing, Annette; Stickler, Ursula; Batstone, Carolyn

    2007-01-01

    While interaction in online language learning in the area of written computer-mediated communication is well researched, studies focusing on interaction in synchronous online audio environments remain scarce. For this reason, this paper seeks to map the nature and level of interpersonal interaction in both online and the face-to-face language…

  13. Neural Processing of Spoken Words in Specific Language Impairment and Dyslexia

    ERIC Educational Resources Information Center

    Helenius, Paivi; Parviainen, Tiina; Paetau, Ritva; Salmelin, Riitta

    2009-01-01

    Young adults with a history of specific language impairment (SLI) differ from reading-impaired (dyslexic) individuals in terms of limited vocabulary and poor verbal short-term memory. Phonological short-term memory has been shown to play a significant role in learning new words. We investigated the neural signatures of auditory word recognition…

  14. Automatic detection of Parkinson's disease in running speech spoken in three different languages.

    PubMed

    Orozco-Arroyave, J R; Hönig, F; Arias-Londoño, J D; Vargas-Bonilla, J F; Daqrouq, K; Skodda, S; Rusz, J; Nöth, E

    2016-01-01

    The aim of this study is the analysis of continuous speech signals of people with Parkinson's disease (PD) considering recordings in different languages (Spanish, German, and Czech). A method for the characterization of the speech signals, based on the automatic segmentation of utterances into voiced and unvoiced frames, is addressed here. The energy content of the unvoiced sounds is modeled using 12 Mel-frequency cepstral coefficients and 25 bands scaled according to the Bark scale. Four speech tasks comprising isolated words, rapid repetition of the syllables /pa/-/ta/-/ka/, sentences, and read texts are evaluated. The method proves to be more accurate than classical approaches in the automatic classification of speech of people with PD and healthy controls. The accuracies range from 85% to 99% depending on the language and the speech task. Cross-language experiments are also performed confirming the robustness and generalization capability of the method, with accuracies ranging from 60% to 99%. This work comprises a step forward for the development of computer aided tools for the automatic assessment of dysarthric speech signals in multiple languages.
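
    The first stage this abstract describes, automatic segmentation of utterances into voiced and unvoiced frames, can be sketched with classic short-time features. The toy example below (our own simplification, not the authors' method: a zero-crossing-rate threshold on a synthetic signal stands in for a real voicing detector) labels a low-frequency "voiced" segment and a noise "unvoiced" segment; a real system would then compute MFCC and Bark-band energies on the unvoiced frames.

```python
# Toy voiced/unvoiced frame labeling by zero-crossing rate (ZCR).
# Voiced speech is quasi-periodic and low-frequency (low ZCR); unvoiced
# sounds are noise-like (high ZCR). Threshold and signal are assumptions.
import math
import random

FS = 8000    # sample rate, Hz
FRAME = 256  # frame length, samples

def frames(signal):
    for i in range(0, len(signal) - FRAME + 1, FRAME):
        yield signal[i:i + FRAME]

def is_voiced(frame, zcr_thresh=0.15):
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)
    return zcr < zcr_thresh

rng = random.Random(0)
voiced_like = [math.sin(2 * math.pi * 120 * i / FS) for i in range(FS)]  # 120 Hz "voicing"
unvoiced_like = [rng.gauss(0, 0.3) for _ in range(FS)]                   # noise burst

labels = [is_voiced(f) for f in frames(voiced_like + unvoiced_like)]
print(f"{sum(labels)} voiced frames of {len(labels)}")
```

    Practical systems combine ZCR with short-time energy or use a pitch tracker for this decision; the point here is only that the voiced/unvoiced split precedes feature extraction and classification.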

  15. Changes to English as an Additional Language Writers' Research Articles: From Spoken to Written Register

    ERIC Educational Resources Information Center

    Koyalan, Aylin; Mumford, Simon

    2011-01-01

    The process of writing journal articles is increasingly being seen as a collaborative process, especially where the authors are English as an Additional Language (EAL) academics. This study examines the changes made in terms of register to EAL writers' journal articles by a native-speaker writing centre advisor at a private university in Turkey.…

  16. Introduction to Spoken Telugu. Program in Oriental Languages. Publication Series B--Aids--Number 18.

    ERIC Educational Resources Information Center

    Lisker, Leigh

    This is a self-teaching textbook for learning Telugu, designed to be used with a language guide or a native speaker of Telugu. The emphasis is placed on speaking rather than on reading or writing. The book opens with a phonetic preface, to introduce the student to the sounds of Telugu. The textbook itself consists of thirty lessons, each divided…

  17. The Interplay between Spoken Language and Informal Definitions of Statistical Concepts

    ERIC Educational Resources Information Center

    Lavy, Ilana; Mashiach-Eizenberg, Michal

    2009-01-01

    Various terms are used to describe mathematical concepts, in general, and statistical concepts, in particular. Regarding statistical concepts in the Hebrew language, some of these terms have the same meaning both in their everyday use and in mathematics, such as Mode; some of them have a different meaning, such as Expected value and Life…

  18. Spoken Interaction in Online and Face-to-Face Language Tutorials

    ERIC Educational Resources Information Center

    Heins, Barbara; Duensing, Annette; Stickler, Ursula; Batstone, Carolyn

    2007-01-01

    While interaction in online language learning in the area of written computer-mediated communication is well researched, studies focusing on interaction in synchronous online audio environments remain scarce. For this reason, this paper seeks to map the nature and level of interpersonal interaction in both online and the face-to-face language…

  19. Parallel language activation and cognitive control during spoken word recognition in bilinguals

    PubMed Central

    Blumenfeld, Henrike K.; Marian, Viorica

    2013-01-01

    Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300–500ms after word onset was associated with smaller Stroop effects; between 633–767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842

  20. Primary Spoken Language and Neuraxial Labor Analgesia Use Among Hispanic Medicaid Recipients.

    PubMed

    Toledo, Paloma; Eosakul, Stanley T; Grobman, William A; Feinglass, Joe; Hasnain-Wynia, Romana

    2016-01-01

Hispanic women are less likely than non-Hispanic Caucasian women to use neuraxial labor analgesia. It is unknown whether there is a disparity in anticipated or actual use of neuraxial labor analgesia among Hispanic women based on primary language (English versus Spanish). In this 3-year retrospective, single-institution, cross-sectional study, we extracted electronic medical record data on Hispanic nulliparous women with vaginal deliveries who were insured by Medicaid. On admission, patients self-identified their primary language and anticipated analgesic use for labor. Extracted data included age, marital status, labor type, delivery provider (obstetrician or midwife), and anticipated and actual analgesic use. Household income was estimated from census data geocoded by zip code. Multivariable logistic regression models were estimated for anticipated and actual neuraxial analgesia use. Among 932 Hispanic women, 182 self-identified as primary Spanish speakers. Spanish-speaking Hispanic women were less likely to anticipate and use neuraxial anesthesia than English-speaking women. After controlling for confounders, there was an association between primary language and anticipated neuraxial analgesia use (adjusted relative risk: Spanish- versus English-speaking women, 0.70; 97.5% confidence interval, 0.53-0.92). Similarly, there was an association between language and neuraxial analgesia use (adjusted relative risk: Spanish- versus English-speaking women 0.88; 97.5% confidence interval, 0.78-0.99). The use of a midwife compared with an obstetrician also decreased the likelihood of both anticipating and using neuraxial analgesia. A language-based disparity was found in neuraxial labor analgesia use. It is possible that there are communication barriers in knowledge or understanding of analgesic options. Further research is necessary to determine the cause of this association.
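The study reports adjusted relative risks from multivariable models; as a simpler illustration of the reported 97.5% interval width, here is an unadjusted relative risk with a Katz log-method confidence interval. The counts are hypothetical, and this does not reproduce the paper's covariate-adjusted estimates.

```python
import math

def relative_risk_ci(a, n1, b, n2):
    """Unadjusted relative risk of two proportions a/n1 and b/n2,
    with a Katz log-method 97.5% confidence interval."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    z = 2.2414  # two-sided 97.5% normal quantile (z at 0.9875)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical counts: 50/100 exposed vs 70/100 unexposed used analgesia.
rr, lo, hi = relative_risk_ci(50, 100, 70, 100)
```

An interval excluding 1.0, as in the abstract's 0.53-0.92, indicates a statistically reliable difference at this confidence level.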

  1. Comprehension of spoken language in non-speaking children with severe cerebral palsy: an explorative study on associations with motor type and disabilities.

    PubMed

    Geytenbeek, Joke J M; Vermeulen, R Jeroen; Becher, Jules G; Oostrom, Kim J

    2015-03-01

    To assess spoken language comprehension in non-speaking children with severe cerebral palsy (CP) and to explore possible associations with motor type and disability. Eighty-seven non-speaking children (44 males, 43 females, mean age 6y 8mo, SD 2y 1mo) with spastic (54%) or dyskinetic (46%) CP (Gross Motor Function Classification System [GMFCS] levels IV [39%] and V [61%]) underwent spoken language comprehension assessment with the computer-based instrument for low motor language testing (C-BiLLT), a new and validated diagnostic instrument. A multiple linear regression model was used to investigate which variables explained the variation in C-BiLLT scores. Associations between spoken language comprehension abilities (expressed in z-score or age-equivalent score) and motor type of CP, GMFCS and Manual Ability Classification System (MACS) levels, gestational age, and epilepsy were analysed with Fisher's exact test. A p-value <0.05 was considered statistically significant. Chronological age, motor type, and GMFCS classification explained 33% (R=0.577, R(2) =0.33) of the variance in spoken language comprehension. Of the children aged younger than 6 years 6 months, 52.4% of the children with dyskinetic CP attained comprehension scores within the average range (z-score ≥-1.6) as opposed to none of the children with spastic CP. Of the children aged older than 6 years 6 months, 32% of the children with dyskinetic CP reached the highest achievable age-equivalent score compared to 4% of the children with spastic CP. No significant difference in disability was found between CP-related variables (MACS levels, gestational age, epilepsy), with the exception of GMFCS which showed a significant difference in children aged younger than 6 years 6 months (p=0.043). Despite communication disabilities in children with severe CP, particularly in dyskinetic CP, spoken language comprehension may show no or only moderate delay. These findings emphasize the importance of introducing

  2. Distinguishing Features in Scoring L2 Chinese Speaking Performance: How Do They Work?

    ERIC Educational Resources Information Center

    Jin, Tan; Mak, Barley

    2013-01-01

    For Chinese as a second language (L2 Chinese), there has been little research into "distinguishing features" (Fulcher, 1996; Iwashita et al., 2008) used in scoring L2 Chinese speaking performance. The study reported here investigates the relationship between the distinguishing features of L2 Chinese spoken performances and the scores…

  4. Predictors of Early Reading Skill in 5-Year-Old Children With Hearing Loss Who Use Spoken Language.

    PubMed

    Cupples, Linda; Ching, Teresa Y C; Crowe, Kathryn; Day, Julia; Seeto, Mark

    2014-01-01

    This research investigated the concurrent association between early reading skills and phonological awareness (PA), print knowledge, language, cognitive, and demographic variables in 101 5-year-old children with prelingual hearing losses ranging from mild to profound who communicated primarily using spoken language. All participants were fitted with hearing aids (n = 71) or cochlear implants (n = 30). They completed standardized assessments of PA, receptive vocabulary, letter knowledge, word and non-word reading, passage comprehension, math reasoning, and nonverbal cognitive ability. Multiple regressions revealed that PA (assessed using judgments of similarity based on words' initial or final sounds) made a significant, independent contribution to children's early reading ability (for both letters and words/non-words) after controlling for variation in receptive vocabulary, nonverbal cognitive ability, and a range of demographic variables (including gender, degree of hearing loss, communication mode, type of sensory device, age at fitting of sensory devices, and level of maternal education). Importantly, the relationship between PA and reading was specific to reading and did not generalize to another academic ability, math reasoning. Additional multiple regressions showed that letter knowledge (names or sounds) was superior in children whose mothers had undertaken post-secondary education, and that better receptive vocabulary was associated with less severe hearing loss, use of a cochlear implant, and earlier age at implant switch-on. Earlier fitting of hearing aids or cochlear implants was not, however, significantly associated with better PA or reading outcomes in this cohort of children, most of whom were fitted with sensory devices before 3 years of age.

  5. Predictors of Early Reading Skill in 5-Year-Old Children With Hearing Loss Who Use Spoken Language

    PubMed Central

    Ching, Teresa Y.C.; Crowe, Kathryn; Day, Julia; Seeto, Mark

    2013-01-01

    This research investigated the concurrent association between early reading skills and phonological awareness (PA), print knowledge, language, cognitive, and demographic variables in 101 5-year-old children with prelingual hearing losses ranging from mild to profound who communicated primarily using spoken language. All participants were fitted with hearing aids (n = 71) or cochlear implants (n = 30). They completed standardized assessments of PA, receptive vocabulary, letter knowledge, word and non-word reading, passage comprehension, math reasoning, and nonverbal cognitive ability. Multiple regressions revealed that PA (assessed using judgments of similarity based on words’ initial or final sounds) made a significant, independent contribution to children’s early reading ability (for both letters and words/non-words) after controlling for variation in receptive vocabulary, nonverbal cognitive ability, and a range of demographic variables (including gender, degree of hearing loss, communication mode, type of sensory device, age at fitting of sensory devices, and level of maternal education). Importantly, the relationship between PA and reading was specific to reading and did not generalize to another academic ability, math reasoning. Additional multiple regressions showed that letter knowledge (names or sounds) was superior in children whose mothers had undertaken post-secondary education, and that better receptive vocabulary was associated with less severe hearing loss, use of a cochlear implant, and earlier age at implant switch-on. Earlier fitting of hearing aids or cochlear implants was not, however, significantly associated with better PA or reading outcomes in this cohort of children, most of whom were fitted with sensory devices before 3 years of age. PMID:24563553

  6. Grammatical number processing and anticipatory eye movements are not tightly coordinated in English spoken language comprehension.

    PubMed

    Riordan, Brian; Dye, Melody; Jones, Michael N

    2015-01-01

    Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information - e.g., grammatical gender and number marking - can produce anticipatory eye movements to referents in the visual scene. We investigated how type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants' eye movements were recorded as they listened to simple English declarative (There are the lions.) and interrogative (Where are the lions?) sentences. In Experiment 1, no differences were observed in speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing.

  7. Grammatical number processing and anticipatory eye movements are not tightly coordinated in English spoken language comprehension

    PubMed Central

    Riordan, Brian; Dye, Melody; Jones, Michael N.

    2015-01-01

    Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information – e.g., grammatical gender and number marking – can produce anticipatory eye movements to referents in the visual scene. We investigated how type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants’ eye movements were recorded as they listened to simple English declarative (There are the lions.) and interrogative (Where are the lions?) sentences. In Experiment 1, no differences were observed in speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing. PMID:25999900

  8. Associations between Chinese Language Classroom Environments and Students' Motivation to Learn the Language

    ERIC Educational Resources Information Center

    Chua, Siew Lian; Wong, Angela F. L.; Chen, Der-Thanq

    2009-01-01

    Associations between the nature of Chinese Language Classroom Environments and Singapore secondary school students' motivation to learn the Chinese Language were investigated. A sample of 1,460 secondary three (grade 9) students from 50 express stream (above average academic ability) classes in Singapore government secondary schools was involved…

  9. 75 FR 5767 - All Terrain Vehicle Chinese Language Webinar; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-04

    ... From the Federal Register Online via the Government Publishing Office CONSUMER PRODUCT SAFETY COMMISSION All Terrain Vehicle Chinese Language Webinar; Meeting AGENCY: Consumer Product Safety Commission. ACTION: Notice. The Consumer Product Safety Commission (CPSC) is announcing the following meeting: All...

  10. Things happen: Individuals with high obsessive-compulsive tendencies omit agency in their spoken language.

    PubMed

    Oren, Ela; Friedmann, Naama; Dar, Reuven

    2016-05-01

    The study examined the prediction that obsessive-compulsive tendencies are related to an attenuated sense of agency (SoA). As most explicit agency judgments are likely to reflect also motivation for and expectation of control, we examined agency in sentence production. Reduced agency can be expressed linguistically by omitting the agent or by using grammatical framings that detach the event from the entity that caused it. We examined the use of agentic language of participants with high vs. low scores on a measure of obsessive-compulsive (OC) symptoms, using structured linguistic tasks in which sentences are elicited in a conversation-like setting. As predicted, high OC individuals produced significantly more non-agentic sentences than low OC individuals, using various linguistic strategies. The results suggest that OC tendencies are related to attenuated SoA. We discuss the implications of these findings for explicating the SoA in OCD and the potential contribution of language analysis for understanding psychopathology. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Preparation of Programmed Chinese Language Materials. Final Report.

    ERIC Educational Resources Information Center

    MacDonald, William L.

    A project to convert part of a Chinese language course ("Standard Chinese: A Modular Approach") to the PLATO computerized teaching system is reported. The project involved: (1) transcribing all the first sequence audiotapes for the first six instructional modules (these tapes present the material of each lesson); (2) rewriting the lesson…

  12. Issues in Chinese Language Teaching in Australian Schools

    ERIC Educational Resources Information Center

    Orton, Jane

    2016-01-01

    The teaching of Chinese in Australian primary and secondary schools has a history of more than 40 years, but it has only been in the past two decades that it has become widespread. Nonetheless, until the last year, of the six most taught languages in schools, Chinese has had by far the smallest number of students. Several factors contribute to…

  13. Cross-sensory correspondences and symbolism in spoken and written language.

    PubMed

    Walker, Peter

    2016-09-01

    Lexical sound symbolism in language appears to exploit the feature associations embedded in cross-sensory correspondences. For example, words incorporating relatively high acoustic frequencies (i.e., front/close rather than back/open vowels) are deemed more appropriate as names for concepts associated with brightness, lightness in weight, sharpness, smallness, speed, and thinness, because higher pitched sounds appear to have these cross-sensory features. Correspondences also support prosodic sound symbolism. For example, speakers might raise the fundamental frequency of their voice to emphasize the smallness of the concept they are naming. The conceptual nature of correspondences and their functional bidirectionality indicate they should also support other types of symbolism, including a visual equivalent of prosodic sound symbolism. For example, the correspondence between auditory pitch and visual thinness predicts that a typeface with relatively thin letter strokes will reinforce a word's reference to a relatively high pitch sound (e.g., squeal). An initial rating study confirms that the thinness-thickness of a typeface's letter strokes accesses the same cross-sensory correspondences observed elsewhere. A series of speeded word classification experiments then confirms that the thinness-thickness of letter strokes can facilitate a reader's comprehension of the pitch of a sound named by a word (thinner letter strokes being appropriate for higher pitch sounds), as can the brightness of the text (e.g., white-on-gray text being appropriate for the names of higher pitch sounds). It is proposed that the elementary visual features of text are represented in the same conceptual system as word meaning, allowing cross-sensory correspondences to support visual symbolism in language. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  14. The vowel inherent spectral change of English vowels spoken by native and non-native speakers.

    PubMed

    Jin, Su-Hyun; Liu, Chang

    2013-05-01

The current study examined Vowel Inherent Spectral Change (VISC) of English vowels spoken by English-, Chinese-, and Korean-native speakers. Two metrics, spectral distance (amount of spectral shift) and spectral angle (direction of spectral shift) of formant movement from the onset to the offset, were measured for 12 English monophthongs produced in a /hvd/ context. While Chinese speakers showed significantly greater spectral distances of vowels than English and Korean speakers, there was no significant effect of speakers' native language on spectral angles. Comparisons to their native vowels for Chinese and Korean speakers suggest that VISC might be affected by language-specific phonological structure.
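The two metrics have a straightforward geometric reading in the F1-F2 plane: distance is the length of the formant-movement vector from vowel onset to offset, and angle is its direction. The sketch below follows that common VISC convention; the paper's exact computation may differ, and the formant values here are illustrative.

```python
import math

def visc_metrics(onset, offset):
    """Spectral distance and angle of formant movement.
    onset/offset: (F1, F2) pairs in Hz at vowel onset and offset."""
    d_f1 = offset[0] - onset[0]
    d_f2 = offset[1] - onset[1]
    distance = math.hypot(d_f1, d_f2)            # amount of spectral shift (Hz)
    angle = math.degrees(math.atan2(d_f2, d_f1)) # direction in the F1-F2 plane
    return distance, angle

# Illustrative vowel: F1 falls 500->400 Hz while F2 rises 1500->1800 Hz.
dist, ang = visc_metrics((500.0, 1500.0), (400.0, 1800.0))
```

Some studies normalize formants (e.g., to the Bark scale) before computing these metrics, which changes the numbers but not the geometry.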

  15. Human inferior colliculus activity relates to individual differences in spoken language learning.

    PubMed

    Chandrasekaran, Bharath; Kraus, Nina; Wong, Patrick C M

    2012-03-01

    A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural "sharpening" models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models.

  16. Dissimilation in the Second Language Acquisition of Mandarin Chinese Tones

    ERIC Educational Resources Information Center

    Zhang, Hang

    2016-01-01

    This article extends Optimality Theoretic studies to the research on second language tone phonology. Specifically, this work analyses the acquisition of identical tone sequences in Mandarin Chinese by adult speakers of three non-tonal languages: English, Japanese and Korean. This study finds that the learners prefer not to use identical lexical…

  18. Motivational Effects of Standardized Language Assessment on Chinese Young Learners

    ERIC Educational Resources Information Center

    Zhao, Chuqiao

    2016-01-01

    This review paper examines how standardized language assessment affects Chinese young learners' motivation for second-language learning. By presenting the historical and contemporary contexts of the testing system in China, this paper seeks to demonstrate the interrelationship among cultural, social, familial, and individual factors, which…

  19. Languaging in Story Rewriting Tasks by Chinese EFL Students

    ERIC Educational Resources Information Center

    Yang, Luxin

    2016-01-01

The present study examined the effects of languaging by asking four pairs of Chinese university English as a foreign language (EFL) students to rewrite a story from a different perspective through three stages (composing-comparing-revising). Multiple sources of data were collected, including pair discussions, co-constructed writings, individual…

  1. Implementation of Task-Based Language Teaching in Chinese as a Foreign Language: Benefits and Challenges

    ERIC Educational Resources Information Center

    Bao, Rui; Du, Xiangyun

    2015-01-01

    Task-based language teaching (TBLT) has been drawing increased attention from language teachers and researchers in the past decade. This paper focuses on the effects of TBLT on beginner learners of Chinese as a foreign language (CFL) in Denmark. Participatory observation and semi-structured interviews were carried out with 18 participants from two…

  2. Bringing out Children's Wonderful Ideas in Teaching Chinese as a Foreign Language.

    ERIC Educational Resources Information Center

    Yang, Yi

    This paper describes one after-school program at the Cambridge Chinese School, dedicated to teaching Chinese literacy to Chinese K-12 students in the Boston, Massachusetts area. In 1998, the school initiated the "Chinese as a Foreign Language" program to cater to the needs of U.S. families with an interest in the Chinese language and culture…

  3. Ethnic Contestation and Language Policy in a Plural Society: The Chinese Language Movement in Malaysia, 1952-1967

    ERIC Educational Resources Information Center

    Yao Sua, Tan; Hooi See, Teoh

    2014-01-01

    The Chinese language movement was launched by the Chinese educationists to demand the recognition of Chinese as an official language to legitimise the status of Chinese education in the national education system in Malaysia. It began in 1952 as a response to the British attempt to establish national primary schools teaching in English and Malay to…

  5. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script

    PubMed Central

    Zhang, Qingfang; Wang, Cheng

    2014-01-01

The effects of word frequency (WF) and syllable frequency (SF) are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French, or English. The present study investigated the effects of WF and SF, and their interaction, in Chinese written and spoken production. Significant facilitatory WF and SF effects were observed in spoken as well as in written production. The SF effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the SF effect over repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, but only in the second repetition in written production. Given the fragility of the SF effect in writing, we suggest that the phonological influence in handwritten production is not mandatory or universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between WF and SF showed that the SF effect is independent of the WF effect in both spoken and written output modalities. The implications of these results for written production models are discussed. PMID:24600420

  6. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script.

    PubMed

    Zhang, Qingfang; Wang, Cheng

    2014-01-01

The effects of word frequency (WF) and syllable frequency (SF) are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French, or English. The present study investigated the effects of WF and SF, and their interaction, in Chinese written and spoken production. Significant facilitatory WF and SF effects were observed in spoken as well as in written production. The SF effect in writing indicated that phonological properties (i.e., syllabic frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the SF effect over repetitions diverged between the two modalities: it was significant in the first two repetitions in spoken production, but only in the second repetition in written production. Given the fragility of the SF effect in writing, we suggest that the phonological influence in handwritten production is not mandatory or universal, and that it is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis, rather than the phonological mediation hypothesis. The absence of an interaction between WF and SF showed that the SF effect is independent of the WF effect in both spoken and written output modalities. The implications of these results for written production models are discussed.

  7. Variability in Chinese as a Foreign Language Learners' Development of the Chinese Numeral Classifier System

    ERIC Educational Resources Information Center

    Zhang, Jie; Lu, Xiaofei

    2013-01-01

    This study examined variability in Chinese as a Foreign Language (CFL) learners' development of the Chinese numeral classifier system from a dynamic systems approach. Our data consisted of a longitudinal corpus of 657 essays written by CFL learners at lower and higher intermediate levels and a corpus of 100 essays written by native speakers (NSs)…

  8. Spoken Dutch.

    ERIC Educational Resources Information Center

    Bloomfield, Leonard

    This course in spoken Dutch is intended for use in introductory conversational classes. The book is divided into five major parts, each containing five learning units and one unit devoted to review. Each unit contains sections including (1) basic sentences, (2) word study and review of basic sentences, (3) listening comprehension, and (4)…

  9. Examining the English Language Policy for Ethnic Minority Students in a Chinese University: A Language Ideology and Language Regime Perspective

    ERIC Educational Resources Information Center

    Han, Yawen; De Costa, Peter I.; Cui, Yaqiong

    2016-01-01

    We focus on the learning of English in a Chinese university in Jiangsu and the university's preferential language policy, which allowed Uyghur minority students from Xinjiang to be enrolled despite their lower scores in the entrance examination. Guided by the constructs of language ideologies [Kroskrity, P. V. (2000). "Regimes of language:…

  11. A randomized trial comparison of the effects of verbal and pictorial naturalistic communication strategies on spoken language for young children with autism.

    PubMed

    Schreibman, Laura; Stahmer, Aubyn C

    2014-05-01

    Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT), to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS), on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.

  12. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    PubMed

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects due to orthographic syllable neighborhood size and sound-to-spelling consistency on P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that P200 was smaller for words whose orthographic syllable neighbors are large in number rather than those that are small. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results provide support for the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.

  13. Age differences in the use of informative/heuristic communicative functions in young children with and without hearing loss who are learning spoken language.

    PubMed

    Nicholas, J G

    2000-04-01

    Previous research has suggested that the normal development of communicative functions proceeds from the directing or "instrumental" types to the informative or "heuristic" types with age. This paper describes a cross-sectional study of communicative function in children with profound hearing loss and children with normal hearing, from ages 12-54 months. The children with hearing loss were learning spoken English as their primary means of communication. The primary purpose of the study was to evaluate whether the patterns of age differences seen in the two groups of children (those with and without normal hearing) are similar patterns occurring at differing chronological ages, or dissimilar patterns altogether. A second purpose was to examine the relationship between the use of informative/heuristic functions and the acquisition of vocabulary and syntax. The data suggested a somewhat different pattern of communicative function development in children with and without hearing loss. In addition, the use of language for social purposes was closely related to the achievement of traditional language milestones. In both normally hearing children and in those with hearing loss, the correlations between the use of informative/heuristic functions and various measures of language development indicated that the more mature uses of language co-occur with increased frequency of communication, larger vocabulary, and longer utterance length. These results document that when linguistic improvements such as increasing vocabulary size and sentence length occur in deaf children learning spoken English, they are used for appropriate and informative social purposes that are commensurate with their language age.

  14. How Vocabulary Size in Two Languages Relates to Efficiency in Spoken Word Recognition by Young Spanish-English Bilinguals

    ERIC Educational Resources Information Center

    Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda

    2010-01-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n=26 ; 2 ; 6). Between-language…

  15. The Equivalence and Difference between the English and Chinese Language Versions of the Repeatable Battery for the Assessment of Neuropsychological Status.

    PubMed

    Phillips, Rachel; Cheung, Yin Bun; Collinson, Simon Lowes; Lim, May-Li; Ling, Audrey; Feng, Lei; Ng, Tze-Pin

    2015-01-01

    Chinese is the most commonly spoken language in the world. The availability of Chinese translations of assessment scales is useful for research in multi-ethnic and multinational studies. This study aimed to establish whether each of the Chinese translations (Mandarin, Hokkien, Teochew, and Cantonese) of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) achieved measurement equivalence to the English version. Participants included 1856 ethnic Chinese, older adults. The RBANS was administered in the language/dialect according to the participants' preference by interviewers who were fluent in that language/dialect. Multiple regression analysis was used to adjust for demographic and clinical differences between participants who spoke different languages/dialects. Equivalence (practical equivalence) was declared if the 90% confidence interval for the adjusted mean difference fell entirely within the pre-specified equivalence margin, ±.2 (±.4) standard deviations. The delayed memory index was at least practically equivalent across languages. The Mandarin, Hokkien, and Teochew versions of the immediate memory, language, and total scale score were practically equivalent to the English version; the Cantonese version showed small differences from the English version. Equivalence was not established for the Hokkien and Teochew versions of the visuospatial/constructional index. The attention index was different across languages. Data from the English and Chinese versions for the total scale score, language, delayed, and immediate memory indexes may be pooled for analysis. However, analysis of the attention and visuospatial/constructional indexes from the English and Chinese versions should include a covariate that represents the version in the statistical adjustment.
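
    The equivalence rule this abstract describes (declare practical equivalence when the 90% confidence interval for the adjusted mean difference falls entirely within ±.2, or ±.4, standard deviations) reduces to a simple interval check. A minimal sketch, with invented numbers rather than the study's data:

```python
def practically_equivalent(diff, se, sd, margin_sd=0.2, z90=1.645):
    """Equivalence holds if the 90% CI for the adjusted mean difference
    lies entirely inside +/- margin_sd standard deviations."""
    lo, hi = diff - z90 * se, diff + z90 * se
    bound = margin_sd * sd
    return -bound < lo and hi < bound

# Invented values for illustration (adjusted difference, its SE, scale SD):
print(practically_equivalent(0.5, 0.4, 10.0))  # CI (-0.16, 1.16) inside +/-2.0
print(practically_equivalent(1.8, 0.4, 10.0))  # CI (1.14, 2.46) crosses 2.0
```

Widening the margin to ±.4 SD (the "practical equivalence" criterion) simply doubles `bound`, which is why some versions passed at the wider margin only.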

  16. Children's Recall of Words Spoken in Their First and Second Language: Effects of Signal-to-Noise Ratio and Reverberation Time.

    PubMed

    Hurtig, Anders; Keus van de Poll, Marijke; Pekkola, Elina P; Hygge, Staffan; Ljung, Robert; Sörqvist, Patrik

    2015-01-01

    Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants' first- (L1) and second-language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 s) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 in comparison with recall of spoken words presented in L1. Words that were presented with a high SNR (+12 dBA) improved recall compared to a low SNR (+3 dBA). Reverberation time interacted with SNR to the effect that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language.
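
    The crossover the authors report (shorter reverberation helping at +12 dBA but hurting at +3 dBA) is exactly what a 2x2 interaction contrast captures. A toy computation with invented cell means; the sign pattern mimics the reported result, the numbers do not:

```python
# Invented mean recall scores for the 2x2 design (SNR x reverberation time).
recall = {
    ("+12 dBA", "0.3 s"): 7.1,
    ("+12 dBA", "1.2 s"): 6.4,
    ("+3 dBA",  "0.3 s"): 4.8,
    ("+3 dBA",  "1.2 s"): 5.3,
}

# Simple effect of short vs long reverberation at each SNR level:
effect_hi_snr = recall[("+12 dBA", "0.3 s")] - recall[("+12 dBA", "1.2 s")]
effect_lo_snr = recall[("+3 dBA", "0.3 s")] - recall[("+3 dBA", "1.2 s")]
interaction = effect_hi_snr - effect_lo_snr  # nonzero: the simple effects cross

print(round(effect_hi_snr, 1), round(effect_lo_snr, 1), round(interaction, 1))
```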

  17. Children’s Recall of Words Spoken in Their First and Second Language: Effects of Signal-to-Noise Ratio and Reverberation Time

    PubMed Central

    Hurtig, Anders; Keus van de Poll, Marijke; Pekkola, Elina P.; Hygge, Staffan; Ljung, Robert; Sörqvist, Patrik

    2016-01-01

    Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants’ first- (L1) and second-language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 s) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 in comparison with recall of spoken words presented in L1. Words that were presented with a high SNR (+12 dBA) improved recall compared to a low SNR (+3 dBA). Reverberation time interacted with SNR to the effect that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language. PMID:26834665

  18. The Influence of Chinese Character Handwriting Diagnosis and Remedial Instruction System on Learners of Chinese as a Foreign Language

    ERIC Educational Resources Information Center

    Hsiao, Hsien-Sheng; Chang, Cheng-Sian; Chen, Chiao-Jia; Wu, Chia-Hou; Lin, Chien-Yu

    2015-01-01

    This study designed and developed a Chinese character handwriting diagnosis and remedial instruction (CHDRI) system to improve Chinese as a foreign language (CFL) learners' ability to write Chinese characters. The CFL learners were given two tests based on the CHDRI system. One test focused on Chinese character handwriting to diagnose the CFL…

  20. How Does the Linguistic Distance Between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances During Verbal Memory Examination.

    PubMed

    Taha, Haitham

    2017-06-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and a phonologically similar version (PS). The results showed that for immediate free recall, performance was better in the SL and PS conditions than in the SA condition. However, for delayed recall and recognition, the results did not reveal any significant consistent effect of diglossia. Accordingly, it is suggested that diglossia has a significant effect on storage and short-term memory functions but not on long-term memory functions. The results are discussed in light of different approaches in the field of bilingual memory.

  1. An Instrument for Investigating Chinese Language Learning Environments in Singapore Secondary Schools

    ERIC Educational Resources Information Center

    Chua, Siew Lian; Wong, Angela F. L.; Chen, Der-Thanq

    2009-01-01

    This paper describes how a new classroom environment instrument, the "Chinese Language Classroom Environment Inventory (CLCEI)", was developed to investigate the nature of Chinese language classroom learning environments in Singapore secondary schools. The CLCEI is a bilingual instrument (English and Chinese Language) with 48 items…

  2. Text Memorisation in Chinese Foreign Language Education

    ERIC Educational Resources Information Center

    Yu, Xia

    2012-01-01

    In China, a widespread learning practice for foreign languages is reading, reciting, and memorising texts. This book investigates this practice against a background of Confucian heritage learning and Western attitudes towards memorising, particularly audio-lingual approaches to language teaching and later, largely negative, attitudes. The author…

  4. Spoken name pronunciation evaluation

    NASA Astrophysics Data System (ADS)

    Tepperman, Joseph; Narayanan, Shrikanth

    2004-10-01

    Recognition of spoken names is an important ASR task since many speech applications can be associated with it. However, the task is also among the most difficult ones due to the large number of names, their varying origins, and the multiple valid pronunciations of any given name, largely dependent upon the speaker's mother tongue and familiarity with the name. In order to explore the speaker- and language-dependent pronunciation variability issues present in name pronunciation, a spoken name database was collected from 101 speakers with varying native languages. Each speaker was asked to pronounce 80 polysyllabic names, uniformly chosen from ten language origins. In preliminary experiments, various prosodic features were used to train Gaussian mixture models (GMMs) to identify misplaced syllabic emphasis within the name, at roughly 85% accuracy. Articulatory features (voicing, place, and manner of articulation) derived from MFCCs were also incorporated for that purpose. The combined prosodic and articulatory features were used to automatically grade the quality of name pronunciation. These scores can be used to provide meaningful feedback to foreign language learners. A detailed description of the name database and some preliminary results on the accuracy of detecting misplaced stress patterns will be reported.
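
    The GMM-based detection of misplaced syllabic emphasis described in this abstract can be caricatured with the simplest possible version: one Gaussian per class over a single prosodic feature, classifying by likelihood. Everything below is synthetic, and the feature (a normalized syllable-duration value) is a made-up stand-in for the paper's prosodic measurements:

```python
import math
import random

random.seed(1)

# Synthetic training data: one prosodic feature value per name token.
correct = [random.gauss(0.4, 0.1) for _ in range(200)]    # correctly stressed
misplaced = [random.gauss(0.8, 0.1) for _ in range(200)]  # misplaced stress

def fit(xs):
    """Fit a single Gaussian (mean, variance) to the samples."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, v

def loglik(x, m, v):
    """Log-likelihood of x under a Gaussian with mean m, variance v."""
    return -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)

params = {"correct": fit(correct), "misplaced": fit(misplaced)}

def classify(x):
    """Label a token by whichever class model gives higher likelihood."""
    return max(params, key=lambda c: loglik(x, *params[c]))

print(classify(0.35))  # near the 'correct' cluster
print(classify(0.85))  # near the 'misplaced' cluster
```

The study's actual models used multi-component GMMs over combined prosodic and articulatory features; this sketch only shows the likelihood-comparison step at the core of that approach.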

  5. Language spoken at home and the association between ethnicity and doctor–patient communication in primary care: analysis of survey data for South Asian and White British patients

    PubMed Central

    Brodie, Kara; Abel, Gary

    2016-01-01

    Objectives To investigate if language spoken at home mediates the relationship between ethnicity and doctor–patient communication for South Asian and White British patients. Methods We conducted secondary analysis of patient experience survey data collected from 5870 patients across 25 English general practices. Mixed effect linear regression estimated the difference in composite general practitioner–patient communication scores between White British and South Asian patients, controlling for practice, patient demographics and patient language. Results There was strong evidence of an association between doctor–patient communication scores and ethnicity. South Asian patients reported scores averaging 3.0 percentage points lower (scale of 0–100) than White British patients (95% CI −4.9 to −1.1, p=0.002). This difference reduced to 1.4 points (95% CI −3.1 to 0.4) after accounting for speaking a non-English language at home; respondents who spoke a non-English language at home reported lower scores than English-speakers (adjusted difference 3.3 points, 95% CI −6.4 to −0.2). Conclusions South Asian patients rate communication lower than White British patients within the same practices and with similar demographics. Our analysis further shows that this disparity is largely mediated by language. PMID:26940108
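
    The mediation logic here, an ethnicity gap that shrinks once home language enters the model, can be reproduced on synthetic data with ordinary least squares. A self-contained sketch (pure-Python OLS via the normal equations; the data-generating numbers are invented and are not the survey's):

```python
import random

random.seed(0)

# Synthetic illustration: communication score depends only on home language,
# which is correlated with ethnicity -- so adjusting for language should
# attenuate the raw ethnicity coefficient.
n = 2000
ethnicity = [random.random() < 0.3 for _ in range(n)]        # 1 = minority group
language = [e and random.random() < 0.6 for e in ethnicity]  # non-English at home
score = [80 - 3.5 * l + random.gauss(0, 5) for l in language]

def ols(y, xs):
    """Tiny OLS via the normal equations (Gaussian elimination)."""
    cols = [[1.0] * len(y)] + xs                 # prepend intercept column
    k = len(cols)
    A = [[sum(cols[i][t] * cols[j][t] for t in range(len(y))) for j in range(k)]
         for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(len(y))) for i in range(k)]
    for i in range(k):                           # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * c for a, c in zip(A[j], A[i])]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):                 # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

eth = [float(e) for e in ethnicity]
lang = [float(l) for l in language]
raw = ols(score, [eth])[1]        # ethnicity gap, unadjusted
adj = ols(score, [eth, lang])[1]  # ethnicity gap after adding language
print(f"unadjusted gap {raw:.2f}, language-adjusted gap {adj:.2f}")
```

The study itself used mixed-effects regression with a practice-level grouping term; the plain OLS here only demonstrates the attenuation-on-adjustment pattern that motivates the mediation claim.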

  6. Self-ratings of Spoken Language Dominance: A Multi-Lingual Naming Test (MINT) and Preliminary Norms for Young and Aging Spanish-English Bilinguals*

    PubMed Central

    Gollan, Tamar H.; Weissberger, Gali H.; Runnqvist, Elin; Montoya, Rosa I.; Cera, Cynthia M.

    2014-01-01

    This study investigated correspondence between different measures of bilingual language proficiency contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and named pictures in a Multilingual Naming Test (MINT, and in Experiment 1 also the Boston Naming Test; BNT). Self-ratings, proficiency interview, and the MINT did not differ significantly in classifying bilinguals into language-dominance groups, but naming tests (especially the BNT) classified bilinguals as more English-dominant than other measures. Strong correlations were observed between measures of proficiency in each language and language-dominance, but not degree of balanced bilingualism (index scores). Depending on the measure, up to 60% of bilinguals scored best in their self-reported non-dominant language. The BNT distorted bilingual assessment by underestimating ability in Spanish. These results illustrate what self-ratings can and cannot provide, illustrate the pitfalls of testing bilinguals with measures designed for monolinguals, and invite a multi-measure goal driven approach to classifying bilinguals into dominance groups. PMID:25364296

  7. Language spoken at home and the association between ethnicity and doctor-patient communication in primary care: analysis of survey data for South Asian and White British patients.

    PubMed

    Brodie, Kara; Abel, Gary; Burt, Jenni

    2016-03-03

    To investigate if language spoken at home mediates the relationship between ethnicity and doctor-patient communication for South Asian and White British patients. We conducted secondary analysis of patient experience survey data collected from 5870 patients across 25 English general practices. Mixed effect linear regression estimated the difference in composite general practitioner-patient communication scores between White British and South Asian patients, controlling for practice, patient demographics and patient language. There was strong evidence of an association between doctor-patient communication scores and ethnicity. South Asian patients reported scores averaging 3.0 percentage points lower (scale of 0-100) than White British patients (95% CI -4.9 to -1.1, p=0.002). This difference reduced to 1.4 points (95% CI -3.1 to 0.4) after accounting for speaking a non-English language at home; respondents who spoke a non-English language at home reported lower scores than English-speakers (adjusted difference 3.3 points, 95% CI -6.4 to -0.2). South Asian patients rate communication lower than White British patients within the same practices and with similar demographics. Our analysis further shows that this disparity is largely mediated by language. Published by the BMJ Publishing Group Limited.

  8. Self-ratings of Spoken Language Dominance: A Multi-Lingual Naming Test (MINT) and Preliminary Norms for Young and Aging Spanish-English Bilinguals.

    PubMed

    Gollan, Tamar H; Weissberger, Gali H; Runnqvist, Elin; Montoya, Rosa I; Cera, Cynthia M

    2012-07-01

    This study investigated correspondence between different measures of bilingual language proficiency contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and named pictures in a Multilingual Naming Test (MINT, and in Experiment 1 also the Boston Naming Test; BNT). Self-ratings, proficiency interview, and the MINT did not differ significantly in classifying bilinguals into language-dominance groups, but naming tests (especially the BNT) classified bilinguals as more English-dominant than other measures. Strong correlations were observed between measures of proficiency in each language and language-dominance, but not degree of balanced bilingualism (index scores). Depending on the measure, up to 60% of bilinguals scored best in their self-reported non-dominant language. The BNT distorted bilingual assessment by underestimating ability in Spanish. These results illustrate what self-ratings can and cannot provide, illustrate the pitfalls of testing bilinguals with measures designed for monolinguals, and invite a multi-measure goal driven approach to classifying bilinguals into dominance groups.

  9. Processing Relative Clauses in Chinese as a Second Language

    ERIC Educational Resources Information Center

    Xu, Yi

    2014-01-01

    This project investigates second language (L2) learners' processing of four types of Chinese relative clauses crossing extraction types and demonstrative-classifier (DCl) positions. Using a word order judgment task with a whole-sentence reading technique, the study also discusses how psycholinguistic theories bear explanatory power in L2 data. An…

  10. Dyslexia in Chinese Language: An Overview of Research and Practice

    ERIC Educational Resources Information Center

    Chung, Kevin K. H.; Ho, Connie S. H.

    2010-01-01

    Dyslexia appears to be the most prevalent disability of students with special educational needs in many mainstream classes, affecting around 9.7% of the school population in Hong Kong. The education of these students is therefore of great concern to the community. In the present paper research into dyslexia in the Chinese language is briefly…

  11. Chinese Language Study Abroad in the Summer, 1990. Final Report.

    ERIC Educational Resources Information Center

    Thompson, Richard T.

    After an analysis of the changing numbers of Americans studying Chinese abroad and of Sino-American academic exchanges after the Tiananmen events of 1989, this paper reports on visits to summer language programs. Enrollments were down by 13 percent between the summer of 1988 and 1989, but down by 50 percent between 1989 and 1990. The following…

  14. LIST OF CHINESE DICTIONARIES IN ALL LANGUAGES. EXTERNAL RESEARCH PAPER.

    ERIC Educational Resources Information Center

    Department of State, Washington, DC.

    A compilation from lists of dictionaries used by several U.S. government organizations, this document includes the titles of, and information concerning, dictionaries covering over 25 topics in the scientific and technical fields, and numerous areas of economic, political, and sociological topics. Many Chinese-foreign language dictionaries are…

  16. Invisible and Visible Language Planning: Ideological Factors in the Family Language Policy of Chinese Immigrant Families in Quebec

    ERIC Educational Resources Information Center

    Curdt-Christiansen, Xiao Lan

    2009-01-01

    This ethnographic inquiry examines how family language policies are planned and developed in ten Chinese immigrant families in Quebec, Canada, with regard to their children's language and literacy education in three languages: Chinese, English, and French. The focus is on how multilingualism is perceived and valued, and how these three languages…

  18. Neuropsychological and cognitive effects of Chinese language instruction.

    PubMed

    Hsieh, S L; Tori, C D

    1993-12-01

    Since little is known concerning the cortical and cognitive effects resulting from the study of a tonal, logographic language which is read from top to bottom and in a right-to-left direction, neuropsychological and intellectual measures on 51 bilingual and 41 monolingual Chinese-American children aged 9 to 12 years were compared. Multivariate statistical analyses yielded significant group differences. As predicted, the overall mean IQ of bilinguals was higher than that of monolinguals. Other than coding skills, the expected superior performance of bilinguals on right-hemisphere or bilaterally sensitive measures was not found; rather, the left-dominant sequential abilities of bilinguals surpassed those of their monolingual peers. English fluency of the two groups was equivalent, and monolingual youngsters obtained higher scores than bilinguals on two of five academic achievement tests. The effects of Chinese language instruction among Chinese-American children are discussed in relation to the distinguished educational accomplishments of Asian Americans.

  19. Engaging a "Truly Foreign" Language and Culture: China through Chinese Film

    ERIC Educational Resources Information Center

    Ning, Cynthia

    2009-01-01

    In this article, the author shares how she uses Chinese film in her Chinese language and culture classes. She demonstrates how Chinese films can help students "navigate the uncharted universe of Chinese culture" with reference to several contemporary Chinese films. She describes how intensive viewing of films can develop a deeper and…

  20. Reader for Advanced Spoken Tamil. Final Report.

    ERIC Educational Resources Information Center

    Schiffman, Harold

    This final report describes the development of a textbook for advanced spoken Tamil. There is a marked difference between literary Tamil and spoken Tamil, and training in the former is not sufficient for speaking the language in everyday situations with reasonably educated native speakers. There is difficulty in finding suitable material that…

  1. Learning a Tonal Language by Attending to the Tone: An in Vivo Experiment

    ERIC Educational Resources Information Center

    Liu, Ying; Wang, Min; Perfetti, Charles A.; Brubaker, Brian; Wu, Sumei; MacWhinney, Brian

    2011-01-01

    Learning the Chinese tone system is a major challenge to students of Chinese as a second or foreign language. Part of the problem is that the spoken Chinese syllable presents a complex perceptual input that overlaps tone with segments. This complexity can be addressed through directing attention to the critical features of a component (tone in…

  2. Aspects of Authentic Spoken German: Awareness and Recognition of Elision in the German Classroom

    ERIC Educational Resources Information Center

    Lightfoot, Douglas

    2016-01-01

    This work discusses the importance of spoken German in classroom instruction. The paper examines the nature of natural spoken language as opposed to written language. We find a general consensus that the prevailing language measure (whether pertaining to written or spoken language) in instructional settings more often typifies the rules associated…

  4. Adult Chinese as a Second Language Learners' Willingness to Communicate in Chinese: Effects of Cultural, Affective, and Linguistic Variables.

    PubMed

    Liu, Meihua

    2017-06-01

    The present research explored the effects of cultural, affective, and linguistic variables on adult Chinese as a second language learners' willingness to communicate in Chinese. One hundred and sixty-two Chinese as a second language learners from a Chinese university answered the Willingness to Communicate in Chinese Scale, the Intercultural Sensitivity Scale, the Chinese Speaking Anxiety Scale, the Chinese Learning Motivation Scale, the Use of Chinese Profile, and the Background Questionnaire. The major findings were as follows: (1) scores on the Willingness to Communicate in Chinese Scale were significantly negatively correlated with Chinese speaking anxiety but positively correlated with length of stay in China, and (2) Chinese speaking anxiety was a powerful negative predictor of overall willingness to communicate in Chinese, followed by length of stay in China, Chinese learning motivation, interaction attentiveness, and Chinese proficiency level. Evidently, students' willingness to communicate in Chinese is largely determined by their level of Chinese speaking anxiety and their length of stay in China, mediated by other variables such as Chinese proficiency level and intercultural sensitivity.
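    The kind of analysis this abstract reports (speaking anxiety as a negative predictor of willingness to communicate, length of stay as a positive one) can be illustrated with a minimal regression sketch. All variable names and numbers below are invented for illustration and do not come from the study:

    ```python
    import numpy as np

    # Synthetic stand-in for the study's design: 162 learners, with speaking
    # anxiety depressing willingness to communicate (WTC) and length of stay
    # in China raising it. Every coefficient here is hypothetical.
    rng = np.random.default_rng(0)
    n = 162
    anxiety = rng.normal(3.0, 0.8, n)        # hypothetical anxiety-scale scores
    stay_months = rng.normal(12.0, 6.0, n)   # hypothetical length of stay
    wtc = 5.0 - 0.9 * anxiety + 0.05 * stay_months + rng.normal(0.0, 0.3, n)

    # Ordinary least squares fit of WTC on both predictors.
    X = np.column_stack([np.ones(n), anxiety, stay_months])
    coef, *_ = np.linalg.lstsq(X, wtc, rcond=None)
    intercept, b_anxiety, b_stay = coef
    print(f"anxiety slope: {b_anxiety:.2f}, stay slope: {b_stay:.3f}")
    ```

    On data generated this way, the fitted anxiety slope comes out negative and the length-of-stay slope positive, mirroring the direction of the reported findings.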

  5. On the Literary Language in China and Japan. Preliminary Translations of Selected Works in Sociolinguistics, Number V.

    ERIC Educational Resources Information Center

    Konrad, N. I.

    Orientalists have observed the development of the national "standard" languages of China and Japan as a gradual replacement of the old "written-literary" language by the "colloquial" spoken language. The author defines "written-literary" language, corresponding to "wen-yen" in Chinese and…

  6. The Role of Foreign Domestic Helpers in Hong Kong Chinese Children's English and Chinese Skills: A Longitudinal Study

    ERIC Educational Resources Information Center

    Dulay, Katrina May; Tong, Xiuhong; McBride, Catherine

    2017-01-01

    We investigated the influence of nonparental caregivers, such as foreign domestic helpers (FDH), on the home language spoken to the child and its implications for vocabulary and word reading development in Cantonese- and English-speaking bilingual children. Using data collected from ages 5 to 9, we analyzed Chinese vocabulary, Chinese character…

  7. Asia Society's Ongoing Chinese Language Initiatives

    ERIC Educational Resources Information Center

    Livaccari, Chris; Wang, Jeff

    2009-01-01

    Asia Society remains committed to promoting the teaching and learning of Chinese in American schools as an integral part of the broader agenda of building students' global competency, the key goal of its Partnership for Global Learning. Under the leadership of Asia Society's new Vice President for Education Tony Jackson and with continuing…

  8. Chinese Language Video Clips. [CD-ROM].

    ERIC Educational Resources Information Center

    Fleming, Stephen; Hipley, David; Ning, Cynthia

    This compact disc includes video clips covering six topics for the learner of Chinese: personal information, commercial transactions, travel and leisure, health and sports, food, and school. Filmed on location in Beijing, these naturalistic video clips consist mainly of unrehearsed interviews with ordinary people. The learner is led through a series…

  9. Contribution of Spoken Language and Socio-Economic Background to Adolescents' Educational Achievement at Age 16 Years

    ERIC Educational Resources Information Center

    Spencer, Sarah; Clegg, Judy; Stackhouse, Joy; Rush, Robert

    2017-01-01

    Background: Well-documented associations exist between socio-economic background and language ability in early childhood, and between educational attainment and language ability in children with clinically referred language impairment. However, very little research has looked at the associations between language ability, educational attainment and…

  10. Magnetoencephalography of language: new approaches to understanding the cortical organization of Chinese processing.

    PubMed

    Zhang, Yumei; Zhang, Ning; Han, Zaizhu; Wang, Yilong; Wang, Chunxue; Chen, Hongyan; Wang, Yongjun; Zhang, Xinghu

    2010-07-01

    Chinese is a logographic language. Many of its psycholinguistic characteristics differ from those of alphabetic languages. These differences might be expected to entail a different pattern of neural activity underpinning Chinese language processing compared to the processing of alphabetic languages. The aim of the current study was to investigate neural language centers for processing Chinese language information in healthy Chinese speakers using magnetoencephalography (MEG). Overall, we aimed to elucidate language-specific and language-general characteristics of processing across different language scripts. Ten healthy Chinese-speaking subjects were asked to silently read genuine Chinese characters and view pseudo-characters in an MEG scanner. The functional language areas were located by overlaying the MEG results on magnetic resonance imaging (MRI) images. Distinctive late magnetic response waves were observed in both hemispheres while the subjects were reading genuine Chinese characters. The polarization of the response waveforms was greater in the left than the right hemisphere. Broca's area was located at the back of the gyrus frontalis inferior or gyrus frontalis medius, and Wernicke's area at the gyrus temporalis medius, gyrus temporalis superior, and gyrus supramarginalis. In addition, Wernicke's area was activated earlier than Broca's area. Native Chinese speakers reading Chinese characters showed neural responses lateralized to the left hemisphere. Overall, the functional brain areas activated by reading Chinese in this study corresponded to the classical language centers found for alphabetic languages in previous studies, although some differences were found in the specific patterns of activation.

  11. Scientific psychology within the Chinese language and cultural context.

    PubMed

    Shen, Heyong

    2006-01-01

    The scientific psychology founded by Wilhelm Wundt appeared in China in the late nineteenth century. Scholars translated the name of the discipline into Chinese as Xin-Li-Xue, whose literal meaning is close to "heartology," i.e., "the study of the heart." In Chinese, the same core element "heart" (Xin) is found in most psychological terms, such as those for emotion, thinking, will, forgetting, and memory. By translating Xin as "heart" instead of "mind," we maintain an embodied approach to understanding the "principles of the heart." Through a historical approach to the influence of Western psychology, a cultural analysis of the meaning of the term psychology in Chinese, and a focus on the meeting of Eastern and Western psychology, we can witness the significance of psychology in the Chinese language and cultural context. Psychology in the Chinese cultural context is presented in three parts: the origins of Chinese psychology, from a historical approach; the meaning of "psychology" in Chinese, through a cultural analysis; and the meeting of Eastern and Western psychology, with a focus on its development and future.

  12. How appropriate are the English language test requirements for non-UK-trained nurses? A qualitative study of spoken communication in UK hospitals.

    PubMed

    Sedgwick, Carole; Garner, Mark

    2017-06-01

    Non-native speakers of English who hold nursing qualifications from outside the UK are required to provide evidence of English language competence by achieving a minimum overall score of Band 7 on the International English Language Testing System (IELTS) academic test. To describe the English language required to deal with the daily demands of nursing in the UK, and to compare these abilities with the stipulated levels on the language test, a tracking study was conducted with 4 nurses, and focus groups with 11 further nurses. The transcripts of the interviews and focus groups were analysed thematically for recurrent themes, and the findings were then compared with the requirements of the IELTS spoken test. The study was conducted outside the participants' working shifts in busy London hospitals. The participants in the tracking study were selected opportunistically; all were trained in non-English-speaking countries. Snowball sampling was used for the focus groups, of whom 4 were non-native and 7 native speakers of English. In the tracking study, each of the 4 nurses was interviewed on four occasions, outside the workplace, and as close to the end of a shift as possible. They were asked to recount their spoken interactions during the course of their shift. The participants in the focus groups were asked to describe their typical interactions with patients, family members, doctors, and nursing colleagues. They were prompted to recall specific instances of frequently occurring communication problems. All interactions were audio-recorded, with the participants' permission, and transcribed. Nurses are at the centre of communication for patient care. They have to use appropriate registers to communicate with a range of health professionals, patients and their families. They must elicit information, calm and reassure, instruct, check procedures, ask for and give opinions, agree and disagree. Politeness strategies are needed to avoid threats to face. They participate in medical…

  13. Spoken (Yucatec) Maya. [Preliminary Edition].

    ERIC Educational Resources Information Center

    Blair, Robert W.; Vermont-Salas, Refugio

    This two-volume set of 18 tape-recorded lesson units represents a first attempt at preparing a course in the modern spoken language of some 300,000 inhabitants of the peninsula of Yucatan, the Guatemalan department of the Peten, and certain border areas of Belize. (A short account of the research and background of this material is given in the…

  14. Selected Translated Abstracts of Chinese-Language Climate Change Publications

    SciTech Connect

    Cushman, R.M.; Burtis, M.D.

    1999-05-01

    This report contains English-translated abstracts of important Chinese-language literature concerning global climate change for the years 1995-1998. This body of literature includes the topics of adaptation, ancient climate change, climate variation, the East Asia monsoon, historical climate change, impacts, modeling, and radiation and trace-gas emissions. In addition to the citations and abstracts translated into English, this report presents the original citations and abstracts in Chinese. Author and title indexes are included to assist the reader in locating abstracts of particular interest.

  15. Distinguish Spoken English from Written English: Rich Feature Analysis

    ERIC Educational Resources Information Center

    Tian, Xiufeng

    2013-01-01

    This article analyses the features of four expository essays (Texts A-D) written by secondary school students, with a focus on the differences between spoken and written language. Texts C and D are better written than the other two (Texts A and B), which are more spoken in their language use. The language features are…

  16. Chinese Cultural Education in Post-Colonial Hong Kong: Primary School Chinese Language Teachers' Belief and Practice

    ERIC Educational Resources Information Center

    Kwan, Ming Kai Marko

    2010-01-01

    Before 1997, no formal curriculum on Chinese cultural education for primary schools was developed in Hong Kong although the education authority had started to introduce some items of Chinese cultural learning into the Chinese language syllabus when the Target Oriented Curriculum was implemented in 1996. However, such items were incorporated into…

  18. Phonological awareness development in children with and without spoken language difficulties: A 12-month longitudinal study of German-speaking pre-school children.

    PubMed

    Schaefer, Blanca; Stackhouse, Joy; Wells, Bill

    2017-10-01

    There is strong empirical evidence that English-speaking children with spoken language difficulties (SLD) often have phonological awareness (PA) deficits. The aim of this study was to explore longitudinally if this is also true of pre-school children speaking German, a language that makes extensive use of derivational morphemes which may impact on the acquisition of different PA levels. Thirty 4-year-old children with SLD were assessed on 11 PA subtests at three points over a 12-month period and compared with 97 four-year-old typically developing (TD) children. The TD-group had a mean percentage correct of over 50% for the majority of tasks (including phoneme tasks) and their PA skills developed significantly over time. In contrast, the SLD-group improved their PA performance over time on syllable and rhyme, but not on phoneme level tasks. Group comparisons revealed that children with SLD had weaker PA skills, particularly on phoneme level tasks. The study contributes a longitudinal perspective on PA development before school entry. In line with their English-speaking peers, German-speaking children with SLD showed poorer PA skills than TD peers, indicating that the relationship between SLD and PA is similar across these two related but different languages.

  19. The interaction of lexical tone, intonation and semantic context in on-line spoken word recognition: an ERP study on Cantonese Chinese.

    PubMed

    Kung, Carmen; Chwilla, Dorothee J; Schriefers, Herbert

    2014-01-01

    In two ERP experiments, we investigate the on-line interplay of lexical tone, intonation and semantic context during spoken word recognition in Cantonese Chinese. Experiment 1 shows that lexical tone and intonation interact immediately. Words with a low lexical tone at the end of questions (with a rising question intonation) lead to a processing conflict. This is reflected in a low accuracy in lexical identification and in a P600 effect compared to the same words at the end of a statement. Experiment 2 shows that a strongly biasing semantic context leads to much better lexical-identification performance for words with a low tone at the end of questions and to a disappearance of the P600 effect. These results support the claim that semantic context plays a major role in disentangling the tonal information from the intonational information, and thus, in resolving the on-line conflict between intonation and tone. However, the ERP data indicate that the introduction of a semantic context does not entirely eliminate on-line processing problems for words at the end of questions. This is revealed by the presence of an N400 effect for words with a low lexical tone and for words with a high-mid lexical tone at the end of questions. The ERP data thus show that, while semantic context helps in the eventual lexical identification, it makes the deviation of the contextually expected lexical tone from the actual acoustic signal more salient.

  20. An ERP study on Chinese natives' second language syntactic grammaticalization.

    PubMed

    Xue, Jin; Yang, Jie; Zhang, Jie; Qi, Zhenhai; Bai, Chen; Qiu, Yinchen

    2013-02-08

    The present study is concerned with how Chinese learners of English grammaticalize different English syntactic rules. ERP (event-related potential) data were collected while participants performed English grammaticality judgments. The experimental sentences varied in the degree of similarity between the first language, Chinese (L1), and the second language, English (L2): (a) different in the L1 and the L2; (b) similar in the L1 and the L2; (c) unique to the L2. A P600 effect was found in the L2 for structures that are similar in the L1 and the L2 and for structures unique to the L2, but there was no P600 effect of sentence type for the mismatched structures. The results indicate that L1-L2 similarity and L2 proficiency interact in a complex way.