Sample records for spoken language analysis

  1. Spoken Grammar Awareness Raising: Does It Affect the Listening Ability of Iranian EFL Learners?

    ERIC Educational Resources Information Center

    Rashtchi, Mojgan; Afzali, Mahnaz

    2011-01-01

    Advances in spoken corpora analysis have brought about new insights into language pedagogy and have led to an awareness of the characteristics of spoken language. Current findings have shown that grammar of spoken language is different from written language. However, most listening and speaking materials are concocted based on written grammar and…

  2. Using Language Sample Analysis to Assess Spoken Language Production in Adolescents

    ERIC Educational Resources Information Center

    Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann

    2016-01-01

    Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…

  3. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    ERIC Educational Resources Information Center

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  4. Looking beyond Signed English to Describe the Language of Two Deaf Children.

    ERIC Educational Resources Information Center

    Suty, Karen A.; Friel-Patti, Sandy

    1982-01-01

    Examines the spontaneous language of deaf children without forcing the analysis to fit the features of a spoken language system. Suggests linguistic competence of deaf children is commensurate with their cognitive age and is not adequately described by the standard spoken English language tests. (EKN)

  5. Corpus Based Authenticity Analysis of Language Teaching Course Books

    ERIC Educational Resources Information Center

    Peksoy, Emrah; Harmaoglu, Özhan

    2017-01-01

    In this study, the resemblance of the language learning course books used in Turkey to authentic language spoken by native speakers is explored by using a corpus-based approach. For this, the 10-million-word spoken part of the British National Corpus was selected as reference corpus. After that, all language learning course books used in high…

  6. Top Languages Spoken by English Language Learners Nationally and by State. ELL Information Center Fact Sheet Series. No. 3

    ERIC Educational Resources Information Center

    Batalova, Jeanne; McHugh, Margie

    2010-01-01

    While English Language Learner (ELL) students in the United States speak more than 150 languages, Spanish is by far the most common home or first language, but is not the top language spoken by ELLs in every state. This fact sheet, based on analysis of the U.S. Census Bureau's 2009 American Community Survey, documents the top languages spoken…

  7. One grammar or two? Sign Languages and the Nature of Human Language

    PubMed Central

    Lillo-Martin, Diane C; Gajewski, Jon

    2014-01-01

    Linguistic research has identified abstract properties that seem to be shared by all languages—such properties may be considered defining characteristics. In recent decades, the recognition that human language is found not only in the spoken modality but also in the form of sign languages has led to a reconsideration of some of these potential linguistic universals. In large part, the linguistic analysis of sign languages has led to the conclusion that universal characteristics of language can be stated at an abstract enough level to include languages in both spoken and signed modalities. For example, languages in both modalities display hierarchical structure at sub-lexical and phrasal level, and recursive rule application. However, this does not mean that modality-based differences between signed and spoken languages are trivial. In this article, we consider several candidate domains for modality effects, in light of the overarching question: are signed and spoken languages subject to the same abstract grammatical constraints, or is a substantially different conception of grammar needed for the sign language case? We look at differences between language types based on the use of space, iconicity, and the possibility for simultaneity in linguistic expression. The inclusion of sign languages does support some broadening of the conception of human language—in ways that are applicable for spoken languages as well. Still, the overall conclusion is that one grammar applies for human language, no matter the modality of expression. PMID:25013534

  8. An Analysis of a Language Test for Employment: The Authenticity of the PhonePass Test

    ERIC Educational Resources Information Center

    Chun, Christian W.

    2006-01-01

    This article presents an analysis of Ordinate Corporation's PhonePass Spoken English Test-10. The company promotes this product as being a useful assessment tool for screening job candidates' ability in spoken English. In the real-life domain of the work environment, one of the primary target language use tasks involves extended production…

  9. Spoken Language Processing Model: Bridging Auditory and Language Processing to Guide Assessment and Intervention

    ERIC Educational Resources Information Center

    Medwetsky, Larry

    2011-01-01

    Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…

  10. Word reading skill predicts anticipation of upcoming spoken language input: a study of children developing proficiency in reading.

    PubMed

    Mani, Nivedita; Huettig, Falk

    2014-10-01

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing, potentially attributable to participants' literacy skills. Against this background, the current study examined the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. As in previous studies, children were able to anticipate upcoming spoken language input, but there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words.

  11. Direction Asymmetries in Spoken and Signed Language Interpreting

    ERIC Educational Resources Information Center

    Nicodemus, Brenda; Emmorey, Karen

    2013-01-01

    Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…

  12. Effects of early auditory experience on the spoken language of deaf children at 3 years of age.

    PubMed

    Nicholas, Johanna Grant; Geers, Ann E

    2006-06-01

    By age 3, typically developing children have achieved extensive vocabulary and syntax skills that facilitate both cognitive and social development. Substantial delays in spoken language acquisition have been documented for children with severe to profound deafness, even those with auditory oral training and early hearing aid use. This study documents the spoken language skills achieved by orally educated 3-yr-olds whose profound hearing loss was identified and hearing aids fitted between 1 and 30 mo of age and who received a cochlear implant between 12 and 38 mo of age. The purpose of the analysis was to examine the effects of age, duration, and type of early auditory experience on spoken language competence at age 3.5 yr. The spoken language skills of 76 children who had used a cochlear implant for at least 7 mo were evaluated via standardized 30-minute language sample analysis, a parent-completed vocabulary checklist, and a teacher language-rating scale. The children were recruited from and enrolled in oral education programs or therapy practices across the United States. Inclusion criteria included presumed deaf since birth, English the primary language of the home, no other known conditions that interfere with speech/language development, enrolled in programs using oral education methods, and no known problems with the cochlear implant lasting more than 30 days. Strong correlations were obtained among all language measures. Therefore, principal components analysis was used to derive a single Language Factor score for each child. 
A number of possible predictors of language outcome were examined, including age at identification and intervention with a hearing aid, duration of use of a hearing aid, pre-implant pure-tone average (PTA) threshold with a hearing aid, PTA threshold with a cochlear implant, and duration of use of a cochlear implant/age at implantation (the last two variables were practically identical because all children were tested between 40 and 44 mo of age). Examination of the independent influence of these predictors through multiple regression analysis revealed that pre-implant-aided PTA threshold and duration of cochlear implant use (i.e., age at implant) accounted for 58% of the variance in Language Factor scores. A significant negative coefficient associated with pre-implant-aided threshold indicated that children with poorer hearing before implantation exhibited poorer language skills at age 3.5 yr. Likewise, a strong positive coefficient associated with duration of implant use indicated that children who had used their implant for a longer period of time (i.e., who were implanted at an earlier age) exhibited better language at age 3.5 yr. Age at identification and amplification was unrelated to language outcome, as was aided threshold with the cochlear implant. A significant quadratic trend in the relation between duration of implant use and language score revealed a steady increase in language skill (at age 3.5 yr) for each additional month of use of a cochlear implant after the first 12 mo of implant use. The advantage to language of longer implant use became more pronounced over time. Longer use of a cochlear implant in infancy and very early childhood dramatically affects the amount of spoken language exhibited by 3-yr-old, profoundly deaf children. In this sample, the amount of pre-implant intervention with a hearing aid was not related to language outcome at 3.5 yr of age. 
Rather, it was cochlear implantation at a younger age that served to promote spoken language competence. The previously identified language-facilitating factors of early identification of hearing impairment and early educational intervention may not be sufficient for optimizing the spoken language of profoundly deaf children unless they lead to early cochlear implantation.
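The single "Language Factor" score described in this abstract, a composite derived from strongly correlated language measures via principal components analysis, can be sketched as follows. The data, measure names, and numpy-based PCA are illustrative assumptions, not the authors' actual analysis:

```python
import numpy as np

# Hypothetical scores for 6 children on three correlated language measures:
# language-sample analysis, parent vocabulary checklist, teacher rating.
rng = np.random.default_rng(0)
base = rng.normal(size=(6, 1))                 # shared underlying ability
scores = base + 0.2 * rng.normal(size=(6, 3))  # three noisy measures of it

# Standardize each measure, then take the first principal component
# as a single composite "Language Factor" score per child.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
factor = z @ vt[0]          # projection onto the first component

# Fraction of total variance captured by the first component; when the
# input measures are strongly correlated, this is close to 1.
var_explained = s[0] ** 2 / (s ** 2).sum()
print(factor.shape, round(var_explained, 2))
```

When all measures are highly intercorrelated, as reported in the study, a single component summarizes them with little information loss, which is what justifies collapsing them into one factor score.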

  13. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    ERIC Educational Resources Information Center

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  14. Simultaneous Communication Supports Learning in Noise by Cochlear Implant Users

    PubMed Central

    Blom, Helen C.; Marschark, Marc; Machmer, Elizabeth

    2017-01-01

    Objectives: This study sought to evaluate the potential of using spoken language and signing together (simultaneous communication, SimCom, sign-supported speech) as a means of improving speech recognition, comprehension, and learning by cochlear implant users in noisy contexts. Methods: Forty-eight college students who were active cochlear implant users watched videos of three short presentations, the text versions of which were standardized at the 8th-grade reading level. One passage was presented in spoken language only, one was presented in spoken language with multi-talker babble background noise, and one was presented via simultaneous communication with the same background noise. Following each passage, participants responded to 10 (standardized) open-ended questions designed to assess comprehension. Indicators of participants' spoken language and sign language skills were obtained via self-reports and objective assessments. Results: When spoken materials were accompanied by signs, scores were significantly higher than when materials were spoken in noise without signs. Participants' receptive spoken language skills significantly predicted scores in all three conditions; neither their receptive sign skills nor age of implantation predicted performance. Discussion: Students who are cochlear implant users typically rely solely on spoken language in the classroom. The present results, however, suggest that there are potential benefits of simultaneous communication for such learners in noisy settings. For those cochlear implant users who know sign language, the redundancy of speech and signs potentially can offset the reduced fidelity of spoken language in noise. Conclusion: Accompanying spoken language with signs can benefit learners who are cochlear implant users in noisy situations such as classroom settings. Factors associated with such benefits, such as receptive skills in signed and spoken modalities, classroom acoustics, and material difficulty, need to be empirically examined. PMID:28010675

  15. Simultaneous communication supports learning in noise by cochlear implant users.

    PubMed

    Blom, Helen; Marschark, Marc; Machmer, Elizabeth

    2017-01-01

    This study sought to evaluate the potential of using spoken language and signing together (simultaneous communication, SimCom, sign-supported speech) as a means of improving speech recognition, comprehension, and learning by cochlear implant (CI) users in noisy contexts. Forty-eight college students who were active CI users watched videos of three short presentations, the text versions of which were standardized at the 8th-grade reading level. One passage was presented in spoken language only, one was presented in spoken language with multi-talker babble background noise, and one was presented via simultaneous communication with the same background noise. Following each passage, participants responded to 10 (standardized) open-ended questions designed to assess comprehension. Indicators of participants' spoken language and sign language skills were obtained via self-reports and objective assessments. When spoken materials were accompanied by signs, scores were significantly higher than when materials were spoken in noise without signs. Participants' receptive spoken language skills significantly predicted scores in all three conditions; neither their receptive sign skills nor age of implantation predicted performance. Students who are CI users typically rely solely on spoken language in the classroom. The present results, however, suggest that there are potential benefits of simultaneous communication for such learners in noisy settings. For those CI users who know sign language, the redundancy of speech and signs potentially can offset the reduced fidelity of spoken language in noise. Accompanying spoken language with signs can benefit learners who are CI users in noisy situations such as classroom settings. Factors associated with such benefits, such as receptive skills in signed and spoken modalities, classroom acoustics, and material difficulty, need to be empirically examined.

  16. The effects of sign language on spoken language acquisition in children with hearing loss: a systematic review protocol.

    PubMed

    Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David

    2013-12-06

    Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. 
We will conduct subgroup analyses if possible according to severity/type of hearing disorder, age of identification, and type of hearing technology. This review will provide evidence on the effectiveness of using sign language in combination with oral language therapies for developing spoken language in children with hearing loss who are identified at a young age. The information from this review can provide guidance to parents and intervention specialists, inform policy decisions and provide directions for future research. CRD42013005426.

  17. Automatic translation among spoken languages

    NASA Technical Reports Server (NTRS)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  18. Grammar Is a System That Characterizes Talk in Interaction

    PubMed Central

    Ginzburg, Jonathan; Poesio, Massimo

    2016-01-01

    Much of contemporary mainstream formal grammar theory is unable to provide analyses for language as it occurs in actual spoken interaction. Its analyses are developed for a cleaned-up version of language which omits the disfluencies, non-sentential utterances, gestures, and many other phenomena that are ubiquitous in spoken language. Using evidence from linguistics, conversation analysis, multimodal communication, psychology, language acquisition, and neuroscience, we show that these aspects of language use are rule-governed in much the same way as phenomena captured by conventional grammars. Furthermore, we argue that over the past few years some of the tools required to provide a precise characterization of such phenomena have begun to emerge in theoretical and computational linguistics; hence, there is no reason for treating them as “second class citizens” other than pre-theoretical assumptions about what should fall under the purview of grammar. Finally, we suggest that grammar formalisms covering such phenomena would provide a better foundation not just for linguistic analysis of face-to-face interaction, but also for sister disciplines, such as research on spoken dialogue systems and/or psychological work on language acquisition. PMID:28066279

  19. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  20. On-Line Orthographic Influences on Spoken Language in a Semantic Task

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Perre, Laetitia; Dufau, Stephane; Ziegler, Johannes C.

    2009-01-01

    Literacy changes the way the brain processes spoken language. Most psycholinguists believe that orthographic effects on spoken language are either strategic or restricted to meta-phonological tasks. We used event-related brain potentials (ERPs) to investigate the locus and the time course of orthographic effects on spoken word recognition in a…

  1. Predicting reading ability in teenagers who are deaf or hard of hearing: A longitudinal analysis of language and reading.

    PubMed

    Worsfold, Sarah; Mahon, Merle; Pimperton, Hannah; Stevenson, Jim; Kennedy, Colin

    2018-06-01

    Deaf and hard of hearing (D/HH) children and young people are known to show group-level deficits in spoken language and reading abilities relative to their hearing peers. However, there is little evidence on the longitudinal predictive relationships between language and reading in this population. Aims: To determine the extent to which differences in spoken language ability in childhood predict reading ability in D/HH adolescents. Methods and procedures: Participants were drawn from a population-based cohort study and comprised 53 D/HH teenagers, who used spoken language, and a comparison group of 38 normally hearing teenagers. All had completed standardised measures of spoken language (expression and comprehension) and reading (accuracy and comprehension) at 6-10 and 13-19 years of age. Outcomes and results: Forced-entry stepwise regression showed that, after taking reading ability at age 8 years into account, language scores at age 8 years did not add significantly to the prediction of Reading Accuracy z-scores at age 17 years (change in R² = 0.01, p = .459) but did make a significant contribution to the prediction of Reading Comprehension z-scores at age 17 years (change in R² = 0.17, p < .001). Conclusions and implications: In D/HH individuals who are spoken language users, expressive and receptive language skills in middle childhood predict reading comprehension ability in adolescence. Continued intervention to support language development beyond primary school has the potential to benefit reading comprehension and hence educational access for D/HH adolescents.
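The change-in-R² logic used in this abstract (does language at age 8 predict reading at age 17 beyond what reading at age 8 already predicts?) can be sketched with ordinary least squares on hypothetical data. Variable names, sample size, and effect sizes below are illustrative assumptions, not the study's data:

```python
import numpy as np

# Hypothetical standardized scores: reading and language at age 8
# predicting reading comprehension at age 17 (n = 50).
rng = np.random.default_rng(1)
n = 50
reading8 = rng.normal(size=n)
language8 = 0.6 * reading8 + rng.normal(scale=0.8, size=n)
reading17 = 0.5 * reading8 + 0.4 * language8 + rng.normal(scale=0.7, size=n)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept term."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Step 1: baseline model with reading at age 8 only.
r2_base = r_squared(reading8.reshape(-1, 1), reading17)
# Step 2: add language at age 8; the change in R^2 is its added value.
r2_full = r_squared(np.column_stack([reading8, language8]), reading17)
delta_r2 = r2_full - r2_base
print(round(delta_r2, 3))
```

In the study, this change in R² was negligible for Reading Accuracy (0.01) but substantial for Reading Comprehension (0.17); the sketch shows only the mechanics of computing it.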

  2. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    PubMed Central

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. 
We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language. PMID:24904497

  3. Overlapping Networks Engaged during Spoken Language Production and Its Cognitive Control

    PubMed Central

    Geranmayeh, Fatemeh; Wise, Richard J. S.; Mehta, Amrish; Leech, Robert

    2014-01-01

    Spoken language production is a complex brain function that relies on large-scale networks. These include domain-specific networks that mediate language-specific processes, as well as domain-general networks mediating top-down and bottom-up attentional control. Language control is thought to involve a left-lateralized fronto-temporal-parietal (FTP) system. However, these regions do not always activate for language tasks and similar regions have been implicated in nonlinguistic cognitive processes. These inconsistent findings suggest that either the left FTP is involved in multidomain cognitive control or that there are multiple spatially overlapping FTP systems. We present evidence from an fMRI study using multivariate analysis to identify spatiotemporal networks involved in spoken language production in humans. We compared spoken language production (Speech) with multiple baselines, counting (Count), nonverbal decision (Decision), and “rest,” to pull apart the multiple partially overlapping networks that are involved in speech production. A left-lateralized FTP network was activated during Speech and deactivated during Count and nonverbal Decision trials, implicating it in cognitive control specific to sentential spoken language production. A mirror right-lateralized FTP network was activated in the Count and Decision trials, but not Speech. Importantly, a second overlapping left FTP network showed relative deactivation in Speech. These three networks, with distinct time courses, overlapped in the left parietal lobe. Contrary to the standard model of the left FTP as being dominant for speech, we revealed a more complex pattern within the left FTP, including at least two left FTP networks with competing functional roles, only one of which was activated in speech production. PMID:24966373

  4. Overlapping networks engaged during spoken language production and its cognitive control.

    PubMed

    Geranmayeh, Fatemeh; Wise, Richard J S; Mehta, Amrish; Leech, Robert

    2014-06-25

    Spoken language production is a complex brain function that relies on large-scale networks. These include domain-specific networks that mediate language-specific processes, as well as domain-general networks mediating top-down and bottom-up attentional control. Language control is thought to involve a left-lateralized fronto-temporal-parietal (FTP) system. However, these regions do not always activate for language tasks and similar regions have been implicated in nonlinguistic cognitive processes. These inconsistent findings suggest that either the left FTP is involved in multidomain cognitive control or that there are multiple spatially overlapping FTP systems. We present evidence from an fMRI study using multivariate analysis to identify spatiotemporal networks involved in spoken language production in humans. We compared spoken language production (Speech) with multiple baselines, counting (Count), nonverbal decision (Decision), and "rest," to pull apart the multiple partially overlapping networks that are involved in speech production. A left-lateralized FTP network was activated during Speech and deactivated during Count and nonverbal Decision trials, implicating it in cognitive control specific to sentential spoken language production. A mirror right-lateralized FTP network was activated in the Count and Decision trials, but not Speech. Importantly, a second overlapping left FTP network showed relative deactivation in Speech. These three networks, with distinct time courses, overlapped in the left parietal lobe. Contrary to the standard model of the left FTP as being dominant for speech, we revealed a more complex pattern within the left FTP, including at least two left FTP networks with competing functional roles, only one of which was activated in speech production. Copyright © 2014 Geranmayeh et al.

  5. Lexical Processing in Spanish Sign Language (LSE)

    ERIC Educational Resources Information Center

    Carreiras, Manuel; Gutierrez-Sigut, Eva; Baquero, Silvia; Corina, David

    2008-01-01

    Lexical access is concerned with how the spoken or visual input of language is projected onto the mental representations of lexical forms. To date, most theories of lexical access have been based almost exclusively on studies of spoken languages and/or orthographic representations of spoken languages. Relatively few studies have examined how…

  6. Scaling laws and model of words organization in spoken and written language

    NASA Astrophysics Data System (ADS)

    Bian, Chunhua; Lin, Ruokuang; Zhang, Xiaoyu; Ma, Qianli D. Y.; Ivanov, Plamen Ch.

    2016-01-01

    A broad range of complex physical and biological systems exhibits scaling laws. The human language is a complex system of words organization. Studies of written texts have revealed intriguing scaling laws that characterize the frequency of words occurrence, rank of words, and growth in the number of distinct words with text length. While studies have predominantly focused on the language system in its written form, such as books, little attention is given to the structure of spoken language. Here we investigate a database of spoken language transcripts and written texts, and we uncover that words organization in both spoken language and written texts exhibits scaling laws, although with different crossover regimes and scaling exponents. We propose a model that provides insight into words organization in spoken language and written texts, and successfully accounts for all scaling laws empirically observed in both language forms.
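    The scaling laws this record describes are classically captured by Zipf's law (word frequency falls off roughly as a power of rank) and Heaps' law (vocabulary grows sublinearly with text length). A minimal sketch, assuming plain whitespace-tokenized text and a simple log-log least-squares fit, of how such quantities can be computed:

```python
import math
from collections import Counter

def zipf_exponent(tokens):
    """Estimate the Zipf exponent s in freq(rank) ~ rank^(-s)
    by least squares in log-log space."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope  # Zipf exponent is the negated log-log slope

def heaps_curve(tokens):
    """Growth in the number of distinct words with text length
    (Heaps' law predicts V(n) ~ k * n**beta with beta < 1)."""
    seen, curve = set(), []
    for tok in tokens:
        seen.add(tok)
        curve.append(len(seen))
    return curve

# Toy corpus; real studies would use transcript or book text.
tokens = ("the of the and a of the to the of " * 50).split()
print(zipf_exponent(tokens))
print(heaps_curve(tokens)[:5])
```

    Comparing the fitted exponents and crossover points between spoken transcripts and written texts is the kind of analysis the abstract refers to.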

  7. Grammatical Deviations in the Spoken and Written Language of Hebrew-Speaking Children With Hearing Impairments.

    PubMed

    Tur-Kaspa, Hana; Dromi, Esther

    2001-04-01

    The present study reports a detailed analysis of written and spoken language samples of Hebrew-speaking children aged 11-13 years who are deaf. It focuses on the description of various grammatical deviations in the two modalities. Participants were 13 students with hearing impairments (HI) attending special classrooms integrated into two elementary schools in Tel Aviv, Israel, and 9 students with normal hearing (NH) in regular classes in these same schools. Spoken and written language samples were collected from all participants using the same five preplanned elicitation probes. Students with HI were found to display significantly more grammatical deviations than their NH peers in both their spoken and written language samples. Most importantly, between-modality differences were noted. The participants with HI exhibited significantly more grammatical deviations in their written language samples than in their spoken samples. However, the distribution of grammatical deviations across categories was similar in the two modalities. The most common grammatical deviations in order of their frequency were failure to supply obligatory morphological markers, failure to mark grammatical agreement, and the omission of a major syntactic constituent in a sentence. Word order violations were rarely recorded in the Hebrew samples. Performance differences in the two modalities encourage clinicians and teachers to facilitate target linguistic forms in diverse communication contexts. Furthermore, the identification of linguistic targets for intervention must be based on the unique grammatical structure of the target language.

  8. Interaction and common ground in dementia: Communication across linguistic and cultural diversity in a residential dementia care setting.

    PubMed

    Strandroos, Lisa; Antelius, Eleonor

    2017-09-01

    Previous research concerning bilingual people with a dementia disease has mainly focused on the importance of sharing a spoken language with caregivers. While acknowledging this, the article addresses the multidimensional character of communication and interaction. Because the dementia disease makes spoken language increasingly difficult to use, this multidimensionality becomes particularly important. The article is based on a qualitative analysis of ethnographic fieldwork at a dementia care facility. It presents ethnographic examples of different communicative forms, with particular focus on bilingual interactions. Interaction is understood as a collective and collaborative activity. The text finds that a shared spoken language is advantageous, but is not the only source of, nor a guarantee for, creating common ground and understanding. Communicative resources other than spoken language include, for example, body language, embodiment, artefacts and time. Furthermore, forms of communication are not static but develop, change and are created over time. The ability to communicate is thus not something that one has or has not, but is situationally and collaboratively created. To facilitate this, time and familiarity are central resources, and the results indicate the importance of continuity in interpersonal relations.

  9. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    ERIC Educational Resources Information Center

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  10. The Cortical Organization of Lexical Knowledge: A Dual Lexicon Model of Spoken Language Processing

    ERIC Educational Resources Information Center

    Gow, David W., Jr.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood.…

  11. Directionality Effects in Simultaneous Language Interpreting: The Case of Sign Language Interpreters in the Netherlands

    ERIC Educational Resources Information Center

    van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of the Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives…

  12. Kaqchikel Maya Language Analysis Project

    ERIC Educational Resources Information Center

    Eddy de Pappa, Sarah

    2010-01-01

    The purpose of this analysis was to study the linguistic features of Kaqchikel, a Mayan language currently spoken in Guatemala and increasingly in the United States, in an effort to better prepare teachers of English as a second language (ESL) or English as a foreign language (EFL) to address the distinct needs of a frequently neglected and…

  13. Language and literacy development of deaf and hard-of-hearing children: successes and challenges.

    PubMed

    Lederberg, Amy R; Schick, Brenda; Spencer, Patricia E

    2013-01-01

    Childhood hearing loss presents challenges to language development, especially spoken language. In this article, we review existing literature on deaf and hard-of-hearing (DHH) children's patterns and trajectories of language as well as development of theory of mind and literacy. Individual trajectories vary significantly, reflecting access to early identification/intervention, advanced technologies (e.g., cochlear implants), and perceptually accessible language models. DHH children develop sign language in a similar manner as hearing children develop spoken language, provided they are in a language-rich environment. This occurs naturally for DHH children of deaf parents, who constitute 5% of the deaf population. For DHH children of hearing parents, sign language development depends on the age that they are exposed to a perceptually accessible 1st language as well as the richness of input. Most DHH children are born to hearing families who have spoken language as a goal, and such development is now feasible for many children. Some DHH children develop spoken language in bilingual (sign-spoken language) contexts. For the majority of DHH children, spoken language development occurs in either auditory-only contexts or with sign supports. Although developmental trajectories of DHH children with hearing parents have improved with early identification and appropriate interventions, the majority of children are still delayed compared with hearing children. These DHH children show particular weaknesses in the development of grammar. Language deficits and differences have cascading effects in language-related areas of development, such as theory of mind and literacy development.

  14. Research on Spoken Dialogue Systems

    NASA Technical Reports Server (NTRS)

    Aist, Gregory; Hieronymus, James; Dowding, John; Hockey, Beth Ann; Rayner, Manny; Chatzichrisafis, Nikos; Farrell, Kim; Renders, Jean-Michel

    2010-01-01

    Research in the field of spoken dialogue systems has been performed with the goal of making such systems more robust and easier to use in demanding situations. The term "spoken dialogue systems" signifies unified software systems containing speech-recognition, speech-synthesis, dialogue management, and ancillary components that enable human users to communicate, using natural spoken language or nearly natural prescribed spoken language, with other software systems that provide information and/or services.

  15. The Listening and Spoken Language Data Repository: Design and Project Overview

    ERIC Educational Resources Information Center

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…

  16. The Hebrew CHILDES corpus: transcription and morphological analysis

    PubMed Central

    Albert, Aviad; MacWhinney, Brian; Nir, Bracha

    2014-01-01

    We present a corpus of transcribed spoken Hebrew that reflects spoken interactions between children and adults. The corpus is an integral part of the CHILDES database, which distributes similar corpora for over 25 languages. We introduce a dedicated transcription scheme for the spoken Hebrew data that is sensitive to both the phonology and the standard orthography of the language. We also introduce a morphological analyzer that was specifically developed for this corpus. The analyzer adequately covers the entire corpus, producing detailed correct analyses for all tokens. Evaluation on a new corpus reveals high coverage as well. Finally, we describe a morphological disambiguation module that selects the correct analysis of each token in context. The result is a high-quality morphologically-annotated CHILDES corpus of Hebrew, along with a set of tools that can be applied to new corpora. PMID:25419199

  17. AI-Based Chatterbots and Spoken English Teaching: A Critical Analysis

    ERIC Educational Resources Information Center

    Sha, Guoquan

    2009-01-01

    The aim of various approaches implemented, whether the classical "three Ps" (presentation, practice, and production) or communicative language teaching (CLT), is to achieve communicative competence. Although a lot of software developed for teaching spoken English is dressed up to raise interaction, its methodology is largely rooted in tradition.…

  18. AG Bell Academy Certification Program for Listening and Spoken Language Specialists: Meeting a World-Wide Need for Qualified Professionals

    ERIC Educational Resources Information Center

    Goldberg, Donald M.; Dickson, Cheryl L.; Flexer, Carol

    2010-01-01

    This article discusses the AG Bell Academy for Listening and Spoken Language--an organization designed to build capacity of certified Listening and Spoken Language Specialists (LSLS) by defining and maintaining a set of professional standards for LSLS professionals and thereby addressing the global deficit of qualified LSLS. Definitions and…

  19. Method for automatic measurement of second language speaking proficiency

    NASA Astrophysics Data System (ADS)

    Bernstein, Jared; Balogh, Jennifer

    2005-04-01

    Spoken language proficiency is intuitively related to effective and efficient communication in spoken interactions. However, it is difficult to derive a reliable estimate of spoken language proficiency by situated elicitation and evaluation of a person's communicative behavior. This paper describes the task structure and scoring logic of a group of fully automatic spoken language proficiency tests (for English, Spanish and Dutch) that are delivered via telephone or Internet. Test items are presented in spoken form and require a spoken response. Each test is automatically scored and primarily based on short, decontextualized tasks that elicit integrated listening and speaking performances. The tests present several types of tasks to candidates, including sentence repetition, question answering, sentence construction, and story retelling. The spoken responses are scored according to the lexical content of the response and a set of acoustic base measures on segments, words and phrases, which are scaled with IRT methods or parametrically combined to optimize fit to human listener judgments. Most responses are isolated spoken phrases and sentences that are scored according to their linguistic content, their latency, and their fluency and pronunciation. The item development procedures and item norming are described.
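    The IRT scaling mentioned here reduces, in the simplest (Rasch) case, to a logistic model linking a candidate's latent ability to the probability of a correct item response. A hedged sketch, with hypothetical item difficulties and a simple grid-search maximum-likelihood estimate, of how such a scale can be scored:

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of a correct response given
    ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, difficulties, grid=None):
    """Grid-search maximum-likelihood ability estimate.
    responses: 1 = correct, 0 = incorrect, one per item."""
    if grid is None:
        grid = [x / 10.0 for x in range(-40, 41)]  # theta in [-4, 4]
    def loglik(theta):
        ll = 0.0
        for r, b in zip(responses, difficulties):
            p = rasch_p(theta, b)
            ll += math.log(p) if r else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

# Hypothetical item difficulties and one response pattern.
difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]
responses = [1, 1, 1, 0, 0]
print(estimate_ability(responses, difficulties))
```

    Operational tests like those described would combine many such item scores with acoustic measures; this sketch only illustrates the IRT component.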

  20. Language as a multimodal phenomenon: implications for language learning, processing and evolution

    PubMed Central

    Vigliocco, Gabriella; Perniss, Pamela; Vinson, David

    2014-01-01

    Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is comprised wholly by an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages. PMID:25092660

  1. Conversation Analysis--A Discourse Approach to Teaching Oral English Skills

    ERIC Educational Resources Information Center

    Wu, Yan

    2013-01-01

    This paper explores a pedagogical approach to teaching oral English--Conversation Analysis. First, features of spoken language are described in comparison to written language. Second, Conversation Analysis theory is elaborated in terms of adjacency pairs, turn-taking, repairs, sequences, openings and closings, and feedback. Third, under the…

  2. "Visual" Cortex Responds to Spoken Language in Blind Children.

    PubMed

    Bedny, Marina; Richardson, Hilary; Saxe, Rebecca

    2015-08-19

    Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. 
Copyright © 2015 the authors.

  3. Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.

    PubMed

    Douglas, Michael

    2016-02-01

    To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved significantly higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. 
The preliminary results of this study support further investigation regarding DDI to investigate whether this method can consistently and significantly improve the achievement of children with hearing loss in spoken language skills.

  4. Modality and morphology: what we write may not be what we say.

    PubMed

    Rapp, Brenda; Fischer-Baum, Simon; Miozzo, Michele

    2015-06-01

    Written language is an evolutionarily recent human invention; consequently, its neural substrates cannot be determined by the genetic code. How, then, does the brain incorporate skills of this type? One possibility is that written language is dependent on evolutionarily older skills, such as spoken language; another is that dedicated substrates develop with expertise. If written language does depend on spoken language, then acquired deficits of spoken and written language should necessarily co-occur. Alternatively, if at least some substrates are dedicated to written language, such deficits may doubly dissociate. We report on 5 individuals with aphasia, documenting a double dissociation in which the production of affixes (e.g., the -ing in jumping) is disrupted in writing but not speaking or vice versa. The findings reveal that written- and spoken-language systems are considerably independent from the standpoint of morpho-orthographic operations. Understanding this independence of the orthographic system in adults has implications for the education and rehabilitation of people with written-language deficits. © The Author(s) 2015.

  5. THE STRUCTURE OF AYACUCHO QUECHUA.

    ERIC Educational Resources Information Center

    PARKER, GARY J.; SOLA, DONALD F.

    This linguistic description of Ayacucho Quechua is intended to be a fairly complete analysis of the spoken language. Used with the authors' Spoken Ayacucho Quechua course, it is a comprehensive reference work for the student as well as a contribution to the field of descriptive linguistics. Because of the high degree of inflection and syntactic…

  6. Development of Lexical-Semantic Language System: N400 Priming Effect for Spoken Words in 18- and 24-Month Old Children

    ERIC Educational Resources Information Center

    Rama, Pia; Sirri, Louah; Serres, Josette

    2013-01-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related…

  7. A Platform for Multilingual Research in Spoken Dialogue Systems

    DTIC Science & Technology

    2000-08-01

    UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP010384. TITLE: A Platform for Multilingual Research in Spoken Dialogue Systems. Ronald A. Cole, Ben Serridge, John-Paul Hosom, Andrew Cronk, and Ed Kaiser. Center for Spoken Language Understanding (CSLU), University of Colorado Boulder; Boulder, CO, 80309, USA; Universidad de las Americas; 72820 Santa Catarina Martir; Puebla, Mexico.

  8. The missing foundation in teacher education: Knowledge of the structure of spoken and written language.

    PubMed

    Moats, L C

    1994-01-01

    Reading research supports the necessity for directly teaching concepts about linguistic structure to beginning readers and to students with reading and spelling difficulties. In this study, experienced teachers of reading, language arts, and special education were tested to determine if they have the requisite awareness of language elements (e.g., phonemes, morphemes) and of how these elements are represented in writing (e.g., knowledge of sound-symbol correspondences). The results were surprisingly poor, indicating that even motivated and experienced teachers typically understand too little about spoken and written language structure to be able to provide sufficient instruction in these areas. The utility of language structure knowledge for instructional planning, for assessment of student progress, and for remediation of literacy problems is discussed. The teachers participating in the study subsequently took a course focusing on phonemic awareness training, spoken-written language relationships, and careful analysis of spelling and reading behavior in children. At the end of the course, the teachers judged this information to be essential for teaching and advised that it become a prerequisite for certification. Recommendations for requirements and content of teacher education programs are presented.

  9. Early Sign Language Exposure and Cochlear Implantation Benefits.

    PubMed

    Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S

    2017-07-01

    Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.

  10. Subarashii: Encounters in Japanese Spoken Language Education.

    ERIC Educational Resources Information Center

    Bernstein, Jared; Najmi, Amir; Ehsani, Farzad

    1999-01-01

    Describes Subarashii, an experimental computer-based interactive spoken-language education system designed to understand what a student is saying in Japanese and respond in a meaningful way in spoken Japanese. Implementation of a preprototype version of the Subarashii system identified strengths and limitations of continuous speech recognition…

  11. Building Spoken Language in the First Plane

    ERIC Educational Resources Information Center

    Bettmann, Joen

    2016-01-01

    Through a strong Montessori orientation to the parameters of spoken language, Joen Bettmann makes the case for "materializing" spoken knowledge using the stimulation of real objects and real situations that promote mature discussion around the sensorial aspect of the prepared environment. She lists specific materials in the classroom…

  12. Syntax and reading comprehension: a meta-analysis of different spoken-syntax assessments.

    PubMed

    Brimo, Danielle; Lund, Emily; Sapp, Alysha

    2018-05-01

    Syntax is a language skill purported to support children's reading comprehension. However, researchers who have examined whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments report inconsistent results. To determine if differences in how syntax is measured affect whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments. Studies that included a group comparison design, children with average and below-average reading comprehension, and a spoken-syntax assessment were selected for review. Fourteen articles from a total of 1281 reviewed met the inclusionary criteria. The 14 articles were coded for the age of the children, score on the reading comprehension assessment, type of spoken-syntax assessment, type of syntax construct measured, and score on the spoken-syntax assessment. A random-effects model was used to analyze the difference between the effect sizes of the types of spoken-syntax assessments and the difference between the effect sizes of the syntax construct measured. There was a significant difference between children with average and below-average reading comprehension on spoken-syntax assessments. Those with average and below-average reading comprehension scored significantly differently on spoken-syntax assessments when norm-referenced and researcher-created assessments were compared. However, when the type of construct was compared, children with average and below-average reading comprehension scored significantly differently on assessments that measured knowledge of spoken syntax, but not on assessments that measured awareness of spoken syntax. 
The results of this meta-analysis confirmed that the type of spoken-syntax assessment, whether norm-referenced or researcher-created, did not explain why some researchers reported that there were no significant differences between children with average and below-average reading comprehension, but the syntax construct, awareness or knowledge, did. Thus, when selecting how to measure syntax among school-age children, researchers and practitioners should evaluate whether they are measuring children's awareness of spoken syntax or knowledge of spoken syntax. Other differences, such as participant diagnosis and the format of items on the spoken-syntax assessments, also were discussed as possible explanations for why researchers found that children with average and below-average reading comprehension did not score significantly differently on spoken-syntax assessments. © 2017 Royal College of Speech and Language Therapists.
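    The random-effects model this meta-analysis relies on is commonly fit with the DerSimonian-Laird estimator, which adds a between-study variance component (tau squared) to each study's sampling variance before pooling. A minimal sketch with made-up effect sizes and variances (the actual study data are not reproduced in this record):

```python
def dersimonian_laird(effects, variances):
    """Pool study effect sizes under a random-effects model
    using the DerSimonian-Laird estimate of tau^2."""
    w = [1.0 / v for v in variances]            # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Random-effects weights fold in the between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled, se, tau2

# Hypothetical standardized mean differences and their variances.
effects = [0.45, 0.80, 0.30, 0.62]
variances = [0.04, 0.09, 0.05, 0.07]
print(dersimonian_laird(effects, variances))
```

    When the studies are homogeneous, tau squared collapses to zero and the estimate coincides with the fixed-effect pooled mean.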

  13. The Relationship Between Second Language Anxiety and International Nursing Students' Stress

    ERIC Educational Resources Information Center

    Khawaja, Nigar G.; Chan, Sabrina; Stein, Georgia

    2017-01-01

    We examined the relationship between second language anxiety and international nursing student stress after taking into account the demographic, cognitive, and acculturative factors. International nursing students (N = 152) completed an online questionnaire battery. Hierarchical regression analysis revealed that spoken second language anxiety and…

  14. A Grammar of Bih

    ERIC Educational Resources Information Center

    Nguyen, Tam Thi Minh

    2013-01-01

    Bih is a Chamic (Austronesian) language spoken by approximately 500 people in the Southern highlands of Vietnam. This dissertation is the first descriptive grammar of the language, based on extensive fieldwork and community-based language documentation in Vietnam and written from a functional/typological perspective. The analysis in this work is…

  15. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    PubMed

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  16. Unique Auditory Language-Learning Needs of Hearing-Impaired Children: Implications for Intervention.

    ERIC Educational Resources Information Center

    Johnson, Barbara Ann; Paterson, Marietta M.

    Twenty-seven hearing-impaired young adults with hearing potentially usable for language comprehension and a history of speech language therapy participated in this study of training in using residual hearing for the purpose of learning spoken language. Evaluation of their recalled therapy experiences indicated that listening to spoken language did…

  17. The Road to Language Learning Is Not Entirely Iconic: Iconicity, Neighborhood Density, and Frequency Facilitate Acquisition of Sign Language.

    PubMed

    Caselli, Naomi K; Pyers, Jennie E

    2017-07-01

    Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.

  18. Language as a multimodal phenomenon: implications for language learning, processing and evolution.

    PubMed

    Vigliocco, Gabriella; Perniss, Pamela; Vinson, David

    2014-09-19

    Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is comprised wholly by an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages.

  19. Lexical Analysis to Enhance Man/Machine Interaction: Simplifying and Improving the Creation of Software. Final Report.

    ERIC Educational Resources Information Center

    Hutchins, Sandra E.

    By analyzing the lexicology of natural language (English or other languages as they are commonly spoken or written), as compared to computer languages, this study explored the extent to which syntactic and semantic levels of linguistic analysis can be implemented and effectively used on microcomputers. In Phase I of the study, the Apple IIe with…

  20. The Poetics of Argumentation: The Relevance of Conversational Repetition for Two Theories of Emergent Mathematical Reasoning

    ERIC Educational Resources Information Center

    Staats, Susan

    2017-01-01

    Poetic structures emerge in spoken language when speakers repeat grammatical phrases that were spoken before. They create the potential to amend or comment on previous speech, and to convey meaning through the structure of discourse. This paper considers the ways in which poetic structure analysis contributes to two perspectives on emergent…

  1. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    PubMed

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills.

  2. An Investigation into the State of Status Planning of Tiv Language of Central Nigeria

    ERIC Educational Resources Information Center

    Terkimbi, Atonde

    2016-01-01

    The Tiv language is one of the major languages spoken in central Nigeria. The language is of the Benue-Congo subclass of the Bantu parent family. It has over four million speakers and is spoken in five states of Nigeria. The language, like many other Nigerian languages, is in dire need of language planning efforts and strategies. Some previous efforts were…

  3. Using Children's Folksongs to Transition Beginning Readers from the Familiar Structure of Oral Language to the Structure of Written Language.

    ERIC Educational Resources Information Center

    Rietz, Sandra A.

    Children will meet one less obstacle to making the transition from spoken to written fluency in language if, during the transition period, they experience written language that corresponds structurally to their spoken language patterns. Familiar children's folksongs, because they contain some of the structure of children's oral language, provide…

  4. Standardization of the Revised Token Test in Bangla

    ERIC Educational Resources Information Center

    Kumar, Suman; Kumar, Prashant; Kumari, Punam

    2013-01-01

    Bengali, or Bangla, is an Indo-Aryan language. It is the state language of West Bengal and Tripura and is also spoken in some parts of Assam. Bangla is the official language of Bangladesh. With nearly 230 million speakers (Wikipedia 2010), Bangla is one of the most widely spoken languages in the world. Bangla is the most commonly used language in West…

  5. On the Conventionalization of Mouth Actions in Australian Sign Language.

    PubMed

    Johnston, Trevor; van Roekel, Jane; Schembri, Adam

    2016-03-01

    This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages because the hands produce the signs which individually and in groups are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and the face, plays a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language, making comparisons with other signed languages where data are available, and the form/meaning pairings that these mouth actions instantiate.

  6. Honoring the Child with Dyslexia in a Montessori Classroom

    ERIC Educational Resources Information Center

    Skotheim, Meghan Kane

    2009-01-01

    Speaking, listening, reading, and writing are all language activities. The human capacity for speaking and listening has a biological foundation: wherever there are people, there is spoken language. Acquiring spoken language is an unconscious activity, and, barring any physical deformity or language learning disability, like severe autism, all…

  7. Standardisation a Considerable Force behind Language Death: A Case of Shona

    ERIC Educational Resources Information Center

    Mhute, Isaac

    2016-01-01

    The paper assesses the contribution of standardisation towards language death taking Clement Doke's resolutions on the various Shona dialects as a case study. It is a qualitative analysis of views gathered from speakers of the language situated in various provinces of Zimbabwe, the country in which the language is spoken by around 75% of the…

  8. Spoken Sentence Production in College Students with Dyslexia: Working Memory and Vocabulary Effects

    ERIC Educational Resources Information Center

    Wiseheart, Rebecca; Altmann, Lori J. P.

    2018-01-01

    Background: Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. Aims: To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group…

  9. Relationships between Lexical Processing Speed, Language Skills, and Autistic Traits in Children

    ERIC Educational Resources Information Center

    Abrigo, Erin

    2012-01-01

    According to current models of spoken word recognition listeners understand speech as it unfolds over time. Eye tracking provides a non-invasive, on-line method to monitor attention, providing insight into the processing of spoken language. In the current project a spoken lexical processing assessment (LPA) confirmed current theories of spoken…

  10. The Development of Spoken Language in Deaf Children: Explaining the Unexplained Variance.

    ERIC Educational Resources Information Center

    Musselman, Carol; Kircaali-Iftar, Gonul

    1996-01-01

    This study compared 20 young deaf children with either exceptionally good or exceptionally poor spoken language for their hearing loss, age, and intelligence. Factors associated with high performance included earlier use of binaural ear-level aids, better educated mothers, auditory/verbal or auditory/oral instruction, reliance on spoken language…

  11. Adaptation and Assessment of a Public Speaking Rating Scale

    ERIC Educational Resources Information Center

    Iberri-Shea, Gina

    2017-01-01

    Prominent spoken language assessments such as the Oral Proficiency Interview and the Test of Spoken English have been primarily concerned with speaking ability as it relates to conversation. This paper looks at an additional aspect of spoken language ability, namely public speaking. This study used an adapted form of a public speaking rating scale…

  12. Drop Everything and Write (DEAW): An Innovative Program to Improve Literacy Skills

    ERIC Educational Resources Information Center

    Joshi, R. Malatesha; Aaron, P. G.; Hill, Nancy; Ocker Dean, Emily; Boulware-Gooden, Regina; Rupley, William H.

    2008-01-01

    It is believed that language is an innate ability and, therefore, spoken language is acquired naturally and informally. In contrast, written language is thought to be an invention and, therefore, has to be learned through formal instruction. An alternate view, however, is that spoken language and written language are two forms of manifestations of…

  13. Teachers' perceptions of promoting sign language phonological awareness in an ASL/English bilingual program.

    PubMed

    Crume, Peter K

    2013-10-01

    The National Reading Panel emphasizes that spoken language phonological awareness (PA) developed at home and school can lead to improvements in reading performance in young children. However, research indicates that many deaf children are good readers even though they have limited spoken language PA. Is it possible that some deaf students benefit from teachers who promote sign language PA instead? The purpose of this qualitative study is to examine teachers' beliefs and instructional practices related to sign language PA. A thematic analysis is conducted on 10 participant interviews at an ASL/English bilingual school for the deaf to understand their views and instructional practices. The findings reveal that the participants had strong beliefs in developing students' structural knowledge of signs and used a variety of instructional strategies to build students' knowledge of sign structures in order to promote their language and literacy skills.

  14. The employment of a spoken language computer applied to an air traffic control task.

    NASA Technical Reports Server (NTRS)

    Laveson, J. I.; Silver, C. A.

    1972-01-01

    Assessment of the merits of a limited spoken-language computer (56-word vocabulary) in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve controller performance.

  15. Spoken Language and Mathematics.

    ERIC Educational Resources Information Center

    Raiker, Andrea

    2002-01-01

    States that teachers and learners use spoken language in the three-part mathematics lesson advocated by the British National Numeracy Strategy. Recognizes language's importance by emphasizing correct use of mathematical vocabulary in raising standards. Finds pupils and teachers appear to ascribe different meanings to scientific words because of their…

  16. The Temporal Structure of Spoken Language Understanding.

    ERIC Educational Resources Information Center

    Marslen-Wilson, William; Tyler, Lorraine Komisarjevsky

    1980-01-01

    An investigation of word-by-word time-course of spoken language understanding focused on word recognition and structural and interpretative processes. Results supported an online interactive language processing theory, in which lexical, structural, and interpretative knowledge sources communicate and interact during processing efficiently and…

  17. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    ERIC Educational Resources Information Center

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…

  18. Visual Sonority Modulates Infants' Attraction to Sign Language

    ERIC Educational Resources Information Center

    Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain

    2018-01-01

    The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…

  19. Spoken Language Production in Young Adults: Examining Syntactic Complexity

    ERIC Educational Resources Information Center

    Nippold, Marilyn A.; Frantz-Kaspar, Megan W.; Vigeland, Laura M.

    2017-01-01

    Purpose: In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language…

  20. The Primacy of Language Mixing: The Effects of a Matrix System.

    ERIC Educational Resources Information Center

    Field, Fredric

    1999-01-01

    Focuses on the differences between bilingual mixtures and creoles. In both types of language, elements and structures of two or more distinct languages are intermingled. By contrasting Nahuatl, spoken in Central Mexico, with Palenquero, a Spanish-based creole spoken near the Caribbean coast of Colombia, examines two components of language thought…

  1. Using Spoken Language Benchmarks to Characterize the Expressive Language Skills of Young Children With Autism Spectrum Disorders

    PubMed Central

    Weismer, Susan Ellis

    2015-01-01

    Purpose Spoken language benchmarks proposed by Tager-Flusberg et al. (2009) were used to characterize communication profiles of toddlers with autism spectrum disorders and to investigate if there were differences in variables hypothesized to influence language development at different benchmark levels. Method The communication abilities of a large sample of toddlers with autism spectrum disorders (N = 105) were characterized in terms of spoken language benchmarks. The toddlers were grouped according to these benchmarks to investigate whether there were differences in selected variables across benchmark groups at a mean age of 2.5 years. Results The majority of children in the sample presented with uneven communication profiles with relative strengths in phonology and significant weaknesses in pragmatics. When children were grouped according to one expressive language domain, across-group differences were observed in response to joint attention and gestures but not cognition or restricted and repetitive behaviors. Conclusion The spoken language benchmarks are useful for characterizing early communication profiles and investigating features that influence expressive language growth. PMID:26254475

  2. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    PubMed

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing.

  3. Are Young Children with Cochlear Implants Sensitive to the Statistics of Words in the Ambient Spoken Language?

    ERIC Educational Resources Information Center

    Guo, Ling-Yu; McGregor, Karla K.; Spencer, Linda J.

    2015-01-01

    Purpose: The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation.…

  4. Language Development in Children with Language Disorders: An Introduction to Skinner's Verbal Behavior and the Techniques for Initial Language Acquisition

    ERIC Educational Resources Information Center

    Casey, Laura Baylot; Bicard, David F.

    2009-01-01

    Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…

  5. Written language impairments in primary progressive aphasia: a reflection of damage to central semantic and phonological processes.

    PubMed

    Henry, Maya L; Beeson, Pélagie M; Alexander, Gene E; Rapcsak, Steven Z

    2012-02-01

    Connectionist theories of language propose that written language deficits arise as a result of damage to semantic and phonological systems that also support spoken language production and comprehension, a view referred to as the "primary systems" hypothesis. The objective of the current study was to evaluate the primary systems account in a mixed group of individuals with primary progressive aphasia (PPA) by investigating the relation between measures of nonorthographic semantic and phonological processing and written language performance and by examining whether common patterns of cortical atrophy underlie impairments in spoken versus written language domains. Individuals with PPA and healthy controls were administered a language battery, including assessments of semantics, phonology, reading, and spelling. Voxel-based morphometry was used to examine the relation between gray matter volumes and language measures within brain regions previously implicated in semantic and phonological processing. In accordance with the primary systems account, our findings indicate that spoken language performance is strongly predictive of reading/spelling profile in individuals with PPA and suggest that common networks of critical left hemisphere regions support central semantic and phonological processes recruited for spoken and written language.

  6. Evaluating Corpus Literacy Training for Pre-Service Language Teachers: Six Case Studies

    ERIC Educational Resources Information Center

    Heather, Julian; Helt, Marie

    2012-01-01

    Corpus literacy is the ability to use corpora--large, principled databases of spoken and written language--for language analysis and instruction. While linguists have emphasized the importance of corpus training in teacher preparation programs, few studies have investigated the process of initiating teachers into corpus literacy with the result…

  7. Talk or Chat? Chatroom and Spoken Interaction in a Language Classroom

    ERIC Educational Resources Information Center

    Hamano-Bunce, Douglas

    2011-01-01

    This paper describes a study comparing chatroom and face-to-face oral interaction for the purposes of language learning in a tertiary classroom in the United Arab Emirates. It uses transcripts analysed for Language Related Episodes (collaborative dialogues), which are thought to be externally observable examples of noticing in action. The analysis is…

  8. Online and Face-to-Face Language Learning: A Comparative Analysis of Oral Proficiency in Introductory Spanish

    ERIC Educational Resources Information Center

    Moneypenny, Dianne Burke; Aldrich, Rosalie S.

    2016-01-01

    The primary resistance to online foreign language teaching often involves questions of spoken mastery of second language. In order to address this concern, this research comparatively assesses undergraduate students' oral proficiency in online and face-to-face Spanish classes, while taking into account students' previous second language…

  9. Analysis of Spoken Narratives in a Marathi-Hindi-English Multilingual Aphasic Patient

    ERIC Educational Resources Information Center

    Karbhari-Adhyaru, Medha

    2010-01-01

    In a multilingual country such as India, the probability that clinicians may not have command over different languages used by aphasic patients is very high. Since formal tests in different languages are limited, assessment of people from diverse linguistic backgrounds presents speech-language pathologists with many challenges. With a view to…

  10. The Beneficial Role of L1 Spoken Language Skills on Initial L2 Sign Language Learning: Cognitive and Linguistic Predictors of M2L2 Acquisition

    ERIC Educational Resources Information Center

    Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.

    2017-01-01

    Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…

  11. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language.

    PubMed

    Williams, Joshua T; Newman, Sharlene D

    2017-02-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively few studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and the activation of targets in sparse and dense neighborhoods was compared. Neighborhood density effects in the auditory primed lexical decision task were then compared to previous reports of native deaf signers who were only processing sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, increased inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.

  12. Spoken language development in children following cochlear implantation.

    PubMed

    Niparko, John K; Tobey, Emily A; Thal, Donna J; Eisenberg, Laurie S; Wang, Nae-Yuh; Quittner, Alexandra L; Fink, Nancy E

    2010-04-21

    Cochlear implantation is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe to profound sensorineural hearing loss (SNHL). To prospectively assess spoken language acquisition following cochlear implantation in young children. Prospective, longitudinal, and multidimensional assessment of spoken language development over a 3-year period in children who underwent cochlear implantation before 5 years of age (n = 188) from 6 US centers and hearing children of similar ages (n = 97) from 2 preschools recruited between November 2002 and December 2004. Follow-up completed between November 2005 and May 2008. Performance on measures of spoken language comprehension and expression (Reynell Developmental Language Scales). Children undergoing cochlear implantation showed greater improvement in spoken language performance (10.4; 95% confidence interval [CI], 9.6-11.2 points per year in comprehension; 8.4; 95% CI, 7.8-9.0 in expression) than would be predicted by their preimplantation baseline scores (5.4; 95% CI, 4.1-6.7, comprehension; 5.8; 95% CI, 4.6-7.0, expression), although mean scores were not restored to age-appropriate levels after 3 years. Younger age at cochlear implantation was associated with significantly steeper rate increases in comprehension (1.1; 95% CI, 0.5-1.7 points per year younger) and expression (1.0; 95% CI, 0.6-1.5 points per year younger). Similarly, each 1-year shorter history of hearing deficit was associated with steeper rate increases in comprehension (0.8; 95% CI, 0.2-1.2 points per year shorter) and expression (0.6; 95% CI, 0.2-1.0 points per year shorter). In multivariable analyses, greater residual hearing prior to cochlear implantation, higher ratings of parent-child interactions, and higher socioeconomic status were associated with greater rates of improvement in comprehension and expression. The use of cochlear implants in young children was associated with better spoken language learning than would be predicted from their preimplantation scores.

  13. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    ERIC Educational Resources Information Center

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  14. The Language Development of a Deaf Child with a Cochlear Implant

    ERIC Educational Resources Information Center

    Mouvet, Kimberley; Matthijs, Liesbeth; Loots, Gerrit; Taverniers, Miriam; Van Herreweghe, Mieke

    2013-01-01

    Hearing parents of deaf or partially deaf infants are confronted with the complex question of communication with their child. This question is complicated further by conflicting advice on how to address the child: in spoken language only, in spoken language supported by signs, or in signed language. This paper studies the linguistic environment…

  15. Contribution of Implicit Sequence Learning to Spoken Language Processing: Some Preliminary Findings with Hearing Adults

    ERIC Educational Resources Information Center

    Conway, Christopher M.; Karpicke, Jennifer; Pisoni, David B.

    2007-01-01

    Spoken language consists of a complex, sequentially arrayed signal that contains patterns that can be described in terms of statistical relations among language units. Previous research has suggested that a domain-general ability to learn structured sequential patterns may underlie language acquisition. To test this prediction, we examined the…

  16. Nuffield Early Language Intervention: Evaluation Report and Executive Summary

    ERIC Educational Resources Information Center

    Sibieta, Luke; Kotecha, Mehul; Skipp, Amy

    2016-01-01

    The Nuffield Early Language Intervention is designed to improve the spoken language ability of children during the transition from nursery to primary school. It is targeted at children with relatively poor spoken language skills. Three sessions per week are delivered to groups of two to four children starting in the final term of nursery and…

  17. Lexical Competition during Second-Language Listening: Sentence Context, but Not Proficiency, Constrains Interference from the Native Lexicon

    ERIC Educational Resources Information Center

    Chambers, Craig G.; Cooke, Hilary

    2009-01-01

    A spoken language eye-tracking methodology was used to evaluate the effects of sentence context and proficiency on parallel language activation during spoken language comprehension. Nonnative speakers with varying proficiency levels viewed visual displays while listening to French sentences (e.g., "Marie va decrire la poule" [Marie will…

  18. The Juridical Defence of Rhaeto-Romansh Languages, with Particular Reference to the Friulian Case. Mercator Working Papers.

    ERIC Educational Resources Information Center

    Cisilino, William

    Rhaeto-Romansh is a Neo-Latin language with three varieties. Occidental Rhaeto-Romansh (Romansh) is spoken in Switzerland, in the Canton of the Grisons. Central Rhaeto-Romansh (Dolomite Ladin) is spoken in some of the Italian Dolomite valleys, in the Provinces of Belluno, Bozen/Bolzano, and Trento. Oriental Rhaeto-Romansh (Friulian) is spoken in…

  19. Spoken language outcomes after hemispherectomy: factoring in etiology.

    PubMed

    Curtiss, S; de Bode, S; Mathern, G W

    2001-12-01

    We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) and acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p = .0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p = .0006); right-sided resections led to higher SLRs only for the acquired group (p = .0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p = .0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology. Copyright 2001 Elsevier Science.

  20. Cognitive aging and hearing acuity: modeling spoken language comprehension.

    PubMed

    Wingfield, Arthur; Amichetti, Nicole M; Lash, Amanda

    2015-01-01

    The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model: where it is strong and where there are gaps to be filled.

  1. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    PubMed

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success. 
Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning

    PubMed Central

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.

    2016-01-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success. 
PMID:26866283
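The degree and local-efficiency measures used in the two records above are standard graph-theoretic quantities. A minimal pure-Python sketch (the toy adjacency structure and node names are illustrative assumptions, not data from the study):

```python
from collections import deque
from itertools import combinations

def shortest_path(adj, src, dst):
    """BFS shortest-path length in an unweighted graph; None if unreachable."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def degree(adj, node):
    """Number of direct neighbours of `node`."""
    return len(adj.get(node, ()))

def local_efficiency(adj, node):
    """Mean of 1/d(u, v) over pairs of `node`'s neighbours, computed in the
    subgraph induced by those neighbours (0 if fewer than 2 neighbours)."""
    nbrs = adj.get(node, set())
    if len(nbrs) < 2:
        return 0.0
    sub = {u: adj[u] & nbrs for u in nbrs}   # induced subgraph
    pairs = list(combinations(nbrs, 2))
    eff = 0.0
    for u, v in pairs:
        d = shortest_path(sub, u, v)
        if d:  # skip unreachable pairs (d is None)
            eff += 1.0 / d
    return eff / len(pairs)

# Toy undirected graph as adjacency sets (node names are hypothetical)
adj = {
    "STG": {"A", "B", "C"},
    "A": {"STG", "B"},
    "B": {"STG", "A"},
    "C": {"STG"},
}
print(degree(adj, "STG"))                       # 3
print(round(local_efficiency(adj, "STG"), 3))   # 0.333
```

In the study these metrics were computed on functional connectivity networks derived from RS-fMRI, typically with a dedicated library rather than hand-rolled code.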

  3. Speech-Language Pathologists: Vital Listening and Spoken Language Professionals

    ERIC Educational Resources Information Center

    Houston, K. Todd; Perigoe, Christina B.

    2010-01-01

    Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…

  4. Parent Telegraphic Speech Use and Spoken Language in Preschoolers with ASD

    ERIC Educational Resources Information Center

    Venker, Courtney E.; Bolt, Daniel M.; Meyer, Allison; Sindberg, Heidi; Weismer, Susan Ellis; Tager-Flusberg, Helen

    2015-01-01

    Purpose: There is considerable controversy regarding whether to use telegraphic or grammatical input when speaking to young children with language delays, including children with autism spectrum disorder (ASD). This study examined telegraphic speech use in parents of preschoolers with ASD and associations with children's spoken language 1 year…

  5. Spoken Language Development in Oral Preschool Children with Permanent Childhood Deafness

    ERIC Educational Resources Information Center

    Sarant, Julia Z.; Holt, Colleen M.; Dowell, Richard C.; Rickards, Field W.

    2009-01-01

    This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were…

  6. Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension

    ERIC Educational Resources Information Center

    Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica

    2015-01-01

    When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…

  7. Elaboration and Simplification in Spanish Discourse

    ERIC Educational Resources Information Center

    Granena, Gisela

    2008-01-01

    This article compares spoken discourse models in Spanish as a second language textbooks and online language learning resources with naturally occurring conversations. Telephone service encounters are analyzed from the point of view of three different dimensions of authenticity: linguistic, sociolinguistic, and psycholinguistic. An analysis of 20…

  8. Acquisition of graphic communication by a young girl without comprehension of spoken language.

    PubMed

    von Tetzchner, S; Øvreeide, K D; Jørgensen, K K; Ormhaug, B M; Oxholm, B; Warme, R

    This study describes a graphic-mode communication intervention involving a girl with intellectual impairment and autism who did not develop comprehension of spoken language. The aim was to teach graphic-mode vocabulary that reflected her interests, preferences, and the activities and routines of her daily life, by providing sufficient cues to the meanings of the graphic representations so that she would not need to comprehend spoken instructions. An individual case study design was selected, including the use of written records, participant observation, and registration of the girl's graphic vocabulary and use of graphic signs and other communicative expressions. While the girl's comprehension (and hence use) of spoken language remained lacking over a 3-year period, she acquired active use of over 80 photographs and pictograms. The girl was able to cope better with the cognitive and attentional requirements of graphic communication than with those of spoken language and manual signs, which had been the focus of earlier interventions. Her achievements demonstrate that it is possible for communication-impaired children to learn to use an augmentative and alternative communication system without speech comprehension, provided the intervention utilizes functional strategies and non-language cues to the meaning of the graphic representations that are taught.

  9. Parallel versus Serial Processing Dependencies in the Perisylvian Speech Network: A Granger Analysis of Intracranial EEG Data

    ERIC Educational Resources Information Center

    Gow, David W., Jr.; Keller, Corey J.; Eskandar, Emad; Meng, Nate; Cash, Sydney S.

    2009-01-01

    In this work, we apply Granger causality analysis to high spatiotemporal resolution intracranial EEG (iEEG) data to examine how different components of the left perisylvian language network interact during spoken language perception. The specific focus is on the characterization of serial versus parallel processing dependencies in the dominant…

  10. Bimodal Bilingual Language Development of Hearing Children of Deaf Parents

    ERIC Educational Resources Information Center

    Hofmann, Kristin; Chilla, Solveig

    2015-01-01

    Adopting a bimodal bilingual language acquisition model, this qualitative case study is the first in Germany to investigate the spoken and sign language development of hearing children of deaf adults (codas). The spoken language competence of six codas within the age range of 3;10 to 6;4 is assessed by a series of standardised tests (SETK 3-5,…

  11. Orthographic effects in spoken word recognition: Evidence from Chinese.

    PubMed

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  12. Spoken Language Development in Children Following Cochlear Implantation

    PubMed Central

    Niparko, John K.; Tobey, Emily A.; Thal, Donna J.; Eisenberg, Laurie S.; Wang, Nae-Yuh; Quittner, Alexandra L.; Fink, Nancy E.

    2010-01-01

    Context: Cochlear implantation (CI) is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe-to-profound sensorineural hearing loss (SNHL). Objective: To prospectively assess spoken language acquisition following CI in young children, with adjustment for covariates. Design, Setting, and Participants: Prospective, longitudinal, and multidimensional assessment of spoken language growth over a 3-year period following CI; a prospective cohort study of children who underwent CI before 5 years of age (n = 188) from 6 US centers and hearing children of similar ages (n = 97) from 2 preschools, recruited between November 2002 and December 2004. Follow-up was completed between November 2005 and May 2008. Main Outcome Measures: Performance on measures of spoken language comprehension and expression. Results: Children undergoing CI showed greater growth in spoken language performance (10.4 [95% confidence interval: 9.6–11.2] points/year in comprehension; 8.4 [7.8–9.0] in expression) than would be predicted by their pre-CI baseline scores (5.4 [4.1–6.7] comprehension; 5.8 [4.6–7.0] expression). Although mean scores were not restored to age-appropriate levels after 3 years, significantly greater annual rates of language acquisition were observed in children who were younger at CI (1.1 [0.5–1.7] points in comprehension per year younger; 1.0 [0.6–1.5] in expression) and in children with shorter histories of hearing deficit (0.8 [0.2–1.2] points in comprehension per year shorter; 0.6 [0.2–1.0] for expression). In multivariable analyses, greater residual hearing prior to CI, higher ratings of parent-child interactions, and higher SES were associated with greater rates of growth in comprehension and expression. Conclusions: The use of cochlear implants in young children was associated with better spoken language learning than would be predicted from their pre-implantation scores. However, discrepancies between participants' chronologic and language age persisted after CI, underscoring the importance of early CI in appropriately selected candidates. PMID:20407059

  13. Reliability and validity of the C-BiLLT: a new instrument to assess comprehension of spoken language in young children with cerebral palsy and complex communication needs.

    PubMed

    Geytenbeek, Joke J; Mokkink, Lidwine B; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J

    2014-09-01

    In clinical practice, a variety of diagnostic tests are available to assess a child's comprehension of spoken language. However, none of these tests has been designed specifically for use with children who have severe motor impairments and who experience severe difficulty when using speech to communicate. This article describes the process of investigating the reliability and validity of the Computer-Based Instrument for Low Motor Language Testing (C-BiLLT), which was specifically developed to assess spoken Dutch language comprehension in children with cerebral palsy and complex communication needs. The study included 806 children with typical development and 87 nonspeaking children with cerebral palsy and complex communication needs, and was designed to provide information on the psychometric qualities of the C-BiLLT. The potential utility of the C-BiLLT as a measure of spoken Dutch language comprehension abilities for children with cerebral palsy and complex communication needs is discussed.

  14. Tonal Language Background and Detecting Pitch Contour in Spoken and Musical Items

    ERIC Educational Resources Information Center

    Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.

    2013-01-01

    An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…

  15. User-Centred Design for Chinese-Oriented Spoken English Learning System

    ERIC Educational Resources Information Center

    Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting

    2016-01-01

    Oral production is an important part of English learning. The lack of a language environment offering efficient instruction and feedback is a major obstacle to improving non-native speakers' spoken English. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…

  16. Phonological Awareness in Mandarin of Chinese and Americans

    ERIC Educational Resources Information Center

    Hu, Min

    2009-01-01

    Phonological awareness (PA) is the ability to analyze spoken language into its component sounds and to manipulate these smaller units. Literature review related to PA shows that a variety of factor groups play a role in PA in Mandarin such as linguistic experience (spoken language, alphabetic literacy, and second language learning), item type,…

  17. A Mother Tongue Spoken Mainly by Fathers.

    ERIC Educational Resources Information Center

    Corsetti, Renato

    1996-01-01

    Reviews what is known about Esperanto as a home language and first language. Cases of Esperanto-speaking families have been recorded since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggest that this "artificial bilingualism" can be as successful…

  18. Understanding Communication among Deaf Students Who Sign and Speak: A Trivial Pursuit?

    ERIC Educational Resources Information Center

    Marschark, Marc; Convertino, Carol M.; Macias, Gayle; Monikowski, Christine M.; Sapere, Patricia; Seewagen, Rosemarie

    2007-01-01

    Classroom communication between deaf students was modeled using a question-and-answer game. Participants consisted of student pairs that relied on spoken language, pairs that relied on American Sign Language (ASL), and mixed pairs in which one student used spoken language and one signed. Although the task encouraged students to request…

  19. Multiclausal Utterances Aren't Just for Big Kids: A Framework for Analysis of Complex Syntax Production in Spoken Language of Preschool- and Early School-Age Children

    ERIC Educational Resources Information Center

    Arndt, Karen Barako; Schuele, C. Melanie

    2013-01-01

    Complex syntax production emerges shortly after the emergence of two-word combinations in oral language and continues to develop through the school-age years. This article defines a framework for the analysis of complex syntax in the spontaneous language of preschool- and early school-age children. The purpose of this article is to provide…

  20. An Empirical Study of the Dominating Predictive Features of Spoken Language in a Representative Sample of School Pupils: A Multivariate Description and Analysis of Oral Language Development.

    ERIC Educational Resources Information Center

    Marascuilo, Leonard A.; Loban, Walter

    The purpose of this study was to determine whether language behavior represents an early conditioned verbal response or whether it changes with age and experience; to that end, the study attempted to define unique isolates of language on the basis of actual language produced by young children. Tape-recorded data were collected over 12 years from 211 children in…

  1. Assisting Native Americans in Assuring the Survival and Continuing Vitality of Their Languages. Report To Accompany S. 2044. Senate, 102d Congress, 2d Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Select Committee on Indian Affairs.

    Past U.S. policies toward Indian and other Native American languages attempted to suppress the use of those languages in government-operated Indian schools in order to assimilate Indian children. About 155 Native languages are spoken today in the United States, but only 20 are spoken by people of all ages. The Native American Languages Act of 1990…

  2. What You Don't Know Can Hurt You: The Risk of Language Deprivation by Impairing Sign Language Development in Deaf Children.

    PubMed

    Hall, Wyatte C

    2017-05-01

    A long-standing belief is that sign language interferes with spoken language development in deaf children, despite a chronic lack of evidence supporting this belief. This deserves discussion, as poor life outcomes continue to be seen in the deaf population. This commentary synthesizes research outcomes with signing and non-signing children and highlights fully accessible language as a protective factor for healthy development. Brain changes associated with language deprivation may be misrepresented as sign language interfering with spoken language outcomes of cochlear implants. This may lead professionals and organizations to advocate for preventing sign language exposure before implantation and to spread misinformation. The existence of a time-sensitive language acquisition window means a strong possibility of permanent brain changes when spoken language is not fully accessible to the deaf child and sign language exposure is delayed, as is often standard practice. There is no empirical evidence for the harm of sign language exposure, but there is some evidence for its benefits, and there is growing evidence that lack of language access has negative implications, including cognitive delays, mental health difficulties, lower quality of life, higher trauma, and limited health literacy. Claims that cochlear implant- and spoken language-only approaches are more effective than sign language-inclusive approaches are not empirically supported. Cochlear implants are an unreliable standalone first-language intervention for deaf children. Priorities in deaf child development should focus on healthy growth of all developmental domains through a fully accessible first-language foundation such as sign language, rather than on auditory deprivation and speech skills.

  3. Semantic Fluency in Deaf Children Who Use Spoken and Signed Language in Comparison with Hearing Peers

    ERIC Educational Resources Information Center

    Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2018-01-01

    Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…

  4. The Attitudes and Motivation of Children towards Learning Rarely Spoken Foreign Languages: A Case Study from Saudi Arabia

    ERIC Educational Resources Information Center

    Al-Nofaie, Haifa

    2018-01-01

    This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (hence JFL), a language which is rarely spoken in the country. Studies regarding children's motivation for learning foreign languages that are not widely spread in their contexts in informal settings are scarce. The aim of the study…

  5. Language and the Medial Temporal Lobe: Evidence from H. M.'s Spontaneous Discourse

    ERIC Educational Resources Information Center

    Skotko, Brian G.; Andrews, Edna; Einstein, Gillian

    2005-01-01

    Previous researchers have found it challenging to disentangle the memory and language capabilities of the famous amnesic patient H. M. Here, we present an original linguistic analysis of H. M. based on empirical data drawing upon novel spoken discourse with him. The results did not uncover the language deficits noted previously. Instead, H. M.'s…

  6. Prosodic Reversal in Dogrib (Weledeh Dialect)

    ERIC Educational Resources Information Center

    Jaker, Alessandro Michelangelo

    2012-01-01

    This thesis presents a comprehensive phonological analysis of the Weledeh dialect of Dogrib, a Northern Athabaskan language spoken in the Northwest Territories, Canada, based on the author's own fieldwork. The phonology of Northern Athabaskan languages, and Dogrib in particular, has to date been regarded as highly irregular, and subject to…

  7. Contribution of spoken language and socio-economic background to adolescents' educational achievement at age 16 years.

    PubMed

    Spencer, Sarah; Clegg, Judy; Stackhouse, Joy; Rush, Robert

    2017-03-01

    Well-documented associations exist between socio-economic background and language ability in early childhood, and between educational attainment and language ability in children with clinically referred language impairment. However, very little research has looked at the associations between language ability, educational attainment and socio-economic background during adolescence, particularly in populations without language impairment. To investigate: (1) whether adolescents with higher educational outcomes overall had higher language abilities; and (2) associations between adolescent language ability, socio-economic background and educational outcomes, specifically in relation to Mathematics, English Language and English Literature GCSE grade. A total of 151 participants completed five standardized language assessments measuring vocabulary, comprehension of sentences and spoken paragraphs, and narrative skills and one nonverbal assessment when they were between 13 and 14 years old. These data were compared with the participants' educational achievement obtained upon leaving secondary education (16 years old). Univariate logistic regressions were employed to identify those language assessments and demographic factors that were associated with achieving a targeted A*-C grade in English Language, English Literature and Mathematics General Certificate of Secondary Education (GCSE) at 16 years. Further logistic regressions were then conducted to examine further the contribution of socio-economic background and spoken language skills in the multivariate models. Vocabulary, comprehension of sentences and spoken paragraphs, and mean length of utterance in a narrative task along with socio-economic background contributed to whether participants achieved an A*-C grade in GCSE Mathematics and English Language and English Literature. Nonverbal ability contributed to English Language and Mathematics. 
The results of multivariate logistic regressions then found that vocabulary skills were particularly relevant to all three GCSE outcomes. Socio-economic background only remained important for English Language, once language assessment scores and demographic information were considered. Language ability, and in particular vocabulary, plays an important role for educational achievement. Results confirm a need for ongoing support for spoken language ability throughout secondary education and a potential role for speech and language therapy provision in the continuing drive to reduce the gap in educational attainment between groups from differing socio-economic backgrounds. © 2016 Royal College of Speech and Language Therapists.
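The univariate logistic regressions described in this record relate a language score to a binary grade outcome. A self-contained sketch using plain gradient descent on the log-loss (the scores, labels, learning rate, and step count are all made-up illustrative assumptions, not data or methods from the study):

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Univariate logistic regression (intercept b0, slope b1) fitted by
    plain gradient descent on the mean log-loss."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))  # predicted probability
            g0 += p - y                                  # gradient w.r.t. intercept
            g1 += (p - y) * x                            # gradient w.r.t. slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Hypothetical data: standardized vocabulary score vs. whether a pupil
# achieved the target grade (1) or not (0).
scores   = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
achieved = [0,    0,    0,    1,   0,   1,   1,   1,   1]

b0, b1 = fit_logistic(scores, achieved)
print(f"slope = {b1:.2f}")  # positive slope: higher scores raise the odds
```

A positive fitted slope corresponds to the kind of association reported above, where higher vocabulary scores were linked to higher odds of an A*-C grade; the study itself used standard statistical software and adjusted for demographic covariates.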

  8. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment

    PubMed Central

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

    Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined ease of word detection in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both the TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070

  9. System in Black Language. Multilingual Matters Series: 77.

    ERIC Educational Resources Information Center

    Sutcliffe, David; Figueroa, John

    An examination of pattern in certain languages spoken primarily by Blacks has both a narrow and a broad focus. The former is on the structure and development of the creole spoken by Jamaicans in England and, to a lesser extent, Black Country English. The broader focus is on the relationship between the Kwa languages of West Africa and the…

  10. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm

    ERIC Educational Resources Information Center

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-01-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…

  11. Do Phonological Constraints on the Spoken Word Affect Visual Lexical Decision?

    ERIC Educational Resources Information Center

    Lee, Yang; Moreno, Miguel A.; Carello, Claudia; Turvey, M. T.

    2013-01-01

    Reading a word may involve the spoken language in two ways: in the conversion of letters to phonemes according to the conventions of the language's writing system and the assimilation of phonemes according to the language's constraints on speaking. If so, then words that require assimilation when uttered would require a change in the phonemes…

  12. Spoken Language Comprehension of Phrases, Simple and Compound-Active Sentences in Non-Speaking Children with Severe Cerebral Palsy

    ERIC Educational Resources Information Center

    Geytenbeek, Joke J. M.; Heim, Margriet J. M.; Knol, Dirk L.; Vermeulen, R. Jeroen; Oostrom, Kim J.

    2015-01-01

    Background Children with severe cerebral palsy (CP) (i.e. "non-speaking children with severely limited mobility") are restricted in many domains that are important to the acquisition of language. Aims To investigate comprehension of spoken language on sentence type level in non-speaking children with severe CP. Methods & Procedures…

  13. Examining the Concept of Subordination in Spoken L1 and L2 English: The Case of "If"-Clauses

    ERIC Educational Resources Information Center

    Basterrechea, María; Weinert, Regina

    2017-01-01

    This article explores the applications of research on native spoken language into second language learning in the concept of subordination. Second language (L2) learners' ability to integrate subordinate clauses is considered an indication of higher proficiency (e.g., Ellis & Barkhuizen, 2005; Tarone & Swierzbin, 2009). However, the notion…

  14. Action and object word writing in a case of bilingual aphasia.

    PubMed

    Kambanaros, Maria; Messinis, Lambros; Anyfantis, Emmanouil

    2012-01-01

    We report the spoken and written naming of a bilingual speaker with aphasia in two languages that differ in morphological complexity, orthographic transparency and script: Greek and English. AA presented with difficulties in spoken picture naming together with preserved written picture naming for action words in Greek. In English, AA showed similar performance across both tasks for action and object words, i.e. difficulties retrieving action and object names for both spoken and written naming. Our findings support the hypothesis that the cognitive processes used for spoken and written naming are independent components of the language system and can be selectively impaired after brain injury. In the case of bilingual speakers, such processes impact on both languages. We conclude that grammatical category is an organizing principle in bilingual dysgraphia.

  15. "We communicated that way for a reason": language practices and language ideologies among hearing adults whose parents are deaf.

    PubMed

    Pizer, Ginger; Walters, Keith; Meier, Richard P

    2013-01-01

    Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing communities. Despite this variation, the interviewees' moral judgments of their own and others' communicative behavior suggest that these adults share a language ideology concerning the obligation of all family members to expend effort to overcome potential communication barriers. To our knowledge, such a language ideology is not similarly pervasive among spoken-language bilingual families, raising the question of whether there is something unique about family bimodal bilingualism that imposes different rights and responsibilities on family members than spoken-language family bilingualism does. This ideology unites an otherwise diverse group of interviewees, where each one preemptively denied being a "typical CODA [child of deaf adults]."

  16. Production Is Only Half the Story - First Words in Two East African Languages.

    PubMed

    Alcock, Katherine J

    2017-01-01

    Theories of early learning of nouns in children's vocabularies divide into those that emphasize input (language and non-linguistic aspects) and those that emphasize child conceptualisation. Most data though come from production alone, assuming that learning a word equals speaking it. Methodological issues can mean production and comprehension data within or across input languages are not comparable. Early vocabulary production and comprehension were examined in children hearing two Eastern Bantu languages whose grammatical features may encourage early verb knowledge. Parents of 208 infants aged 8-20 months were interviewed using Communicative Development Inventories that assess infants' first spoken and comprehended words. Raw totals, and proportions of chances to know a word, were compared to data from other languages. First spoken words were mainly nouns (75-95% were nouns versus less than 10% verbs) but first comprehended words included more verbs (15% were verbs) than spoken words did. The proportion of children's spoken words that were verbs increased with vocabulary size, but not the proportion of comprehended words. Significant differences were found between children's comprehension and production but not between languages. This may be for pragmatic reasons, rather than due to concepts with which children approach language learning, or directly due to the input language.

  17. Spoken language skills and educational placement in Finnish children with cochlear implants.

    PubMed

    Lonka, Eila; Hasan, Marja; Komulainen, Erkki

    2011-01-01

    This study reports the demographics, and the auditory and spoken language development as well as educational settings, for a total of 164 Finnish children with cochlear implants. Two questionnaires were employed: the first, concerning day care and educational placement, was filled in by professionals for rehabilitation guidance, and the second, evaluating language development (categories of auditory performance, spoken language skills, and main mode of communication), by speech and language therapists in audiology departments. Nearly half of the children were enrolled in normal kindergartens and 43% of school-aged children in mainstream schools. Categories of auditory performance were observed to grow in relation to age at cochlear implantation (p < 0.001) as well as in relation to proportional hearing age (p < 0.001). The composite scores for language development moved to more diversified ones in relation to increasing age at cochlear implantation and proportional hearing age (p < 0.001). Children without additional disorders outperformed those with additional disorders. The results indicate that the most favorable age for cochlear implantation could be earlier than 2 years of age. Compared to other children, spoken language evaluation scores of those with additional disabilities were significantly lower; however, these children showed gradual improvements in their auditory perception and language scores. Copyright © 2011 S. Karger AG, Basel.

  18. Production Is Only Half the Story — First Words in Two East African Languages

    PubMed Central

    Alcock, Katherine J.

    2017-01-01

    Theories of early learning of nouns in children’s vocabularies divide into those that emphasize input (language and non-linguistic aspects) and those that emphasize child conceptualisation. Most data though come from production alone, assuming that learning a word equals speaking it. Methodological issues can mean production and comprehension data within or across input languages are not comparable. Early vocabulary production and comprehension were examined in children hearing two Eastern Bantu languages whose grammatical features may encourage early verb knowledge. Parents of 208 infants aged 8–20 months were interviewed using Communicative Development Inventories that assess infants’ first spoken and comprehended words. Raw totals, and proportions of chances to know a word, were compared to data from other languages. First spoken words were mainly nouns (75–95% were nouns versus less than 10% verbs) but first comprehended words included more verbs (15% were verbs) than spoken words did. The proportion of children’s spoken words that were verbs increased with vocabulary size, but not the proportion of comprehended words. Significant differences were found between children’s comprehension and production but not between languages. This may be for pragmatic reasons, rather than due to concepts with which children approach language learning, or directly due to the input language. PMID:29163280

  19. Teaching the Spoken Language.

    ERIC Educational Resources Information Center

    Brown, Gillian

    1981-01-01

    Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…

  20. Spoken word recognition by Latino children learning Spanish as their first language*

    PubMed Central

    HURTADO, NEREYDA; MARCHMAN, VIRGINIA A.; FERNALD, ANNE

    2010-01-01

    Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development. PMID:17542157

  1. Defining Spoken Language Benchmarks and Selecting Measures of Expressive Language Development for Young Children With Autism Spectrum Disorders

    PubMed Central

    Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul

    2010-01-01

    Purpose The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition as part of treatment research studies or for use in applied settings and (b) to propose and define a common terminology for describing levels of spoken language ability in the expressive modality and to set benchmarks for determining a child’s language level in order to establish a framework for comparing outcomes across intervention studies. Method The National Institute on Deafness and Other Communication Disorders assembled a group of researchers with interests and experience in the study of language development and disorders in young children with autism spectrum disorders. The group worked for 18 months through a series of conference calls and correspondence, culminating in a meeting held in December 2007 to achieve consensus on these aims. Results The authors recommend moving away from using the term functional speech, replacing it with a developmental framework. Rather, they recommend multiple sources of information to define language phases, including natural language samples, parent report, and standardized measures. They also provide guidelines and objective criteria for defining children’s spoken language expression in three major phases that correspond to developmental levels between 12 and 48 months of age. PMID:19380608

  2. Glossary of Terms Relating to Languages of the Middle East.

    ERIC Educational Resources Information Center

    Ferguson, Charles A.

    This glossary gives brief, non-technical explanations of the following kinds of terms: (1) names of all important languages now spoken in the Middle East, or known to have been spoken in the area; (2) names of language families represented in the area; (3) descriptive terms used with reference to the writing systems of the area; (4) names of…

  3. The Lightening Veil: Language Revitalization in Wales

    ERIC Educational Resources Information Center

    Williams, Colin H.

    2014-01-01

    The Welsh language, which is indigenous to Wales, is one of six Celtic languages. It is spoken by 562,000 speakers, 19% of the population of Wales, according to the 2011 U.K. Census, and it is estimated that it is spoken by a further 200,000 residents elsewhere in the United Kingdom. No exact figures exist for the undoubted thousands of other…

  4. The Use of Multimedia and the Arts in Language Revitalization, Maintenance, and Development: The Case of the Balsas Nahuas of Guerrero, Mexico.

    ERIC Educational Resources Information Center

    Farfan, Jose Antonio Flores

    Even though Nahuatl is the most widely spoken indigenous language in Mexico, it is endangered. Threats include poor support for Nahuatl-speaking communities, migration of Nahuatl speakers to cities where English and Spanish are spoken, prejudicial attitudes toward indigenous languages, lack of contact between small communities of different…

  5. Gesture in Multiparty Interaction: A Study of Embodied Discourse in Spoken English and American Sign Language

    ERIC Educational Resources Information Center

    Shaw, Emily P.

    2013-01-01

    This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…

  6. The Functional Organisation of the Fronto-Temporal Language System: Evidence from Syntactic and Semantic Ambiguity

    ERIC Educational Resources Information Center

    Rodd, Jennifer M.; Longe, Olivia A.; Randall, Billi; Tyler, Lorraine K.

    2010-01-01

    Spoken language comprehension is known to involve a large left-dominant network of fronto-temporal brain regions, but there is still little consensus about how the syntactic and semantic aspects of language are processed within this network. In an fMRI study, volunteers heard spoken sentences that contained either syntactic or semantic ambiguities…

  7. How Does the Linguistic Distance between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances during Verbal Memory Examination

    ERIC Educational Resources Information Center

    Taha, Haitham

    2017-01-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using auditory verbal memory test that was adapted according to the Rey Auditory Verbal Learning Test and developed in three versions: Pure spoken language version (SL), pure standard language version (SA), and…

  8. Notes from the Field: Lolak--Another Moribund Language of Indonesia, with Supporting Audio

    ERIC Educational Resources Information Center

    Lobel, Jason William; Paputungan, Ade Tatak

    2017-01-01

    This paper consists of a short multimedia introduction to Lolak, a near-extinct Greater Central Philippine language traditionally spoken in three small communities on the island of Sulawesi in Indonesia. In addition to being one of the most underdocumented languages in the area, it is also spoken by one of the smallest native speaker populations…

  9. Divergence of fine and gross motor skills in prelingually deaf children: implications for cochlear implantation.

    PubMed

    Horn, David L; Pisoni, David B; Miyamoto, Richard T

    2006-08-01

    The objective of this study was to assess relations between fine and gross motor development and spoken language processing skills in pediatric cochlear implant users. The authors conducted a retrospective analysis of longitudinal data. Prelingually deaf children who received a cochlear implant before age 5 and had no known developmental delay or cognitive impairment were included in the study. Fine and gross motor development were assessed before implantation using the Vineland Adaptive Behavioral Scales, a standardized parental report of adaptive behavior. Fine and gross motor scores reflected a given child's motor functioning with respect to a normative sample of typically developing, normal-hearing children. Relations between these preimplant scores and postimplant spoken language outcomes were assessed. In general, gross motor scores were found to be positively related to chronologic age, whereas the opposite trend was observed for fine motor scores. Fine motor scores were more strongly correlated with postimplant expressive and receptive language scores than gross motor scores. Our findings suggest a dissociation between fine and gross motor development in prelingually deaf children: fine motor skills, in contrast to gross motor skills, tend to be delayed as the prelingually deaf children get older. These findings provide new knowledge about the links between motor and spoken language development and suggest that auditory deprivation may lead to atypical development of certain motor and language skills that share common cortical processing resources.

  10. Effects of Prosody While Disambiguating Ambiguous Japanese Sentences in the Brain of Native Speakers and Learners of Japanese: A Proposition for Pronunciation and Prosody Training

    ERIC Educational Resources Information Center

    Naito-Billen, Yuka

    2012-01-01

    Recently, the significant role that pronunciation and prosody play in processing spoken language has been widely recognized, and a variety of teaching methodologies for pronunciation/prosody have been implemented in teaching foreign languages. Thus, an analysis of how similarly or differently native and L2 learners of a language use…

  11. La mort d'une langue: le judeo-espagnol (The Death of a Language: The Spanish Spoken by Jews)

    ERIC Educational Resources Information Center

    Renard, Raymond

    1971-01-01

    Describes the Sephardic culture which flourished in the Balkans, Ottoman Empire, and North Africa during the Middle Ages. Suggests the use of "Ladino," the language of medieval Spain spoken by the expelled Jews. (DS)

  12. Applications of Text Analysis Tools for Spoken Response Grading

    ERIC Educational Resources Information Center

    Crossley, Scott; McNamara, Danielle

    2013-01-01

    This study explores the potential for automated indices related to speech delivery, language use, and topic development to model human judgments of TOEFL speaking proficiency in second language (L2) speech samples. For this study, 244 transcribed TOEFL speech samples taken from 244 L2 learners were analyzed using automated indices taken from…

  13. Speech perception and spoken word recognition: past and present.

    PubMed

    Jusczyk, Peter W; Luce, Paul A

    2002-02-01

    The scientific study of the perception of spoken language has been an exciting, prolific, and productive area of research for more than 50 yr. We have learned much about infants' and adults' remarkable capacities for perceiving and understanding the sounds of their language, as evidenced by our increasingly sophisticated theories of acquisition, process, and representation. We present a selective, but we hope, representative review of the past half century of research on speech perception, paying particular attention to the historical and theoretical contexts within which this research was conducted. Our foci in this review fall on three principal topics: early work on the discrimination and categorization of speech sounds, more recent efforts to understand the processes and representations that subserve spoken word recognition, and research on how infants acquire the capacity to perceive their native language. Our intent is to provide the reader a sense of the progress our field has experienced over the last half century in understanding the human's extraordinary capacity for the perception of spoken language.

  14. The role of voice input for human-machine communication.

    PubMed Central

    Cohen, P R; Oviatt, S L

    1995-01-01

    Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803

  15. TEACHING DEAF CHILDREN TO TALK.

    ERIC Educational Resources Information Center

    EWING, ALEXANDER; EWING, ETHEL C.

    DESIGNED AS A TEXT FOR AUDIOLOGISTS AND TEACHERS OF HEARING IMPAIRED CHILDREN, THIS BOOK PRESENTS BASIC INFORMATION ABOUT SPOKEN LANGUAGE, HEARING, AND LIPREADING. METHODS AND RESULTS OF EVALUATING SPOKEN LANGUAGE OF AURALLY HANDICAPPED CHILDREN WITHOUT USING READING OR WRITING ARE REPORTED. VARIOUS TYPES OF INDIVIDUAL AND GROUP HEARING AIDS ARE…

  16. Do Different Modalities of Reflection Matter? An Exploration of Adult Second-Language Learners' Reported Strategy Use and Oral Language Production

    ERIC Educational Resources Information Center

    Huang, Li-Shih

    2010-01-01

    This paper reports on a small-scale study that was the first to explore raising second-language (L2) learners' awareness of speaking strategies as mediated by three modalities of task-specific reflection--individual written reflection, individual spoken reflection, and group spoken reflection. Though research in such areas as L2 writing, teacher's…

  17. Defining Spoken Language Benchmarks and Selecting Measures of Expressive Language Development for Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul

    2009-01-01

    Purpose: The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition as part of treatment research studies or for use in applied settings and (b) to propose and define a common terminology for describing levels of spoken…

  18. The Effect of Collocation on Meaning Representation of Adjectives Such as Big and Large in Translation from Two Languages Used in the Article to English Language Texts

    ERIC Educational Resources Information Center

    Minaabad, Malahat Shabani

    2011-01-01

    Translation is the process of transferring written or spoken source language (SL) texts to equivalent written or spoken target language (TL) texts. Translation studies (TS) relies so heavily on a concept of meaning that one may claim that there is no TS without any reference to meanings. People's understanding of the meaning of sentences is far more…

  19. Language spoken at home and the association between ethnicity and doctor–patient communication in primary care: analysis of survey data for South Asian and White British patients

    PubMed Central

    Brodie, Kara; Abel, Gary

    2016-01-01

    Objectives To investigate if language spoken at home mediates the relationship between ethnicity and doctor–patient communication for South Asian and White British patients. Methods We conducted secondary analysis of patient experience survey data collected from 5870 patients across 25 English general practices. Mixed effect linear regression estimated the difference in composite general practitioner–patient communication scores between White British and South Asian patients, controlling for practice, patient demographics and patient language. Results There was strong evidence of an association between doctor–patient communication scores and ethnicity. South Asian patients reported scores averaging 3.0 percentage points lower (scale of 0–100) than White British patients (95% CI −4.9 to −1.1, p=0.002). This difference reduced to 1.4 points (95% CI −3.1 to 0.4) after accounting for speaking a non-English language at home; respondents who spoke a non-English language at home reported lower scores than English-speakers (adjusted difference 3.3 points, 95% CI −6.4 to −0.2). Conclusions South Asian patients rate communication lower than White British patients within the same practices and with similar demographics. Our analysis further shows that this disparity is largely mediated by language. PMID:26940108

  20. Inferring Speaker Affect in Spoken Natural Language Communication

    ERIC Educational Resources Information Center

    Pon-Barry, Heather Roberta

    2013-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…

  1. Linguistics and Literacy.

    ERIC Educational Resources Information Center

    Kindell, Gloria

    1983-01-01

    Discusses four general areas of linguistics studies that are particularly relevant to literacy issues: (1) discourse analysis, including text analysis, spoken and written language, and home and school discourse; (2) relationships between speech and writing, the distance between dialects and written norms, and developmental writing; (3)…

  2. Directionality effects in simultaneous language interpreting: the case of sign language interpreters in The Netherlands.

    PubMed

    Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.

  3. Infant perceptual development for faces and spoken words: An integrated approach

    PubMed Central

    Watson, Tamara L; Robbins, Rachel A; Best, Catherine T

    2014-01-01

    There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. PMID:25132626

  4. Emergent Literacy Skills in Preschool Children with Hearing Loss Who Use Spoken Language: Initial Findings from the Early Language and Literacy Acquisition (ELLA) Study

    ERIC Educational Resources Information Center

    Werfel, Krystal L.

    2017-01-01

    Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…

  5. Individual differences in online spoken word recognition: Implications for SLI

    PubMed Central

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2012-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels; vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014

  6. Preference for language in early infancy: the human language bias is not speech specific.

    PubMed

    Krentz, Ursula C; Corina, David P

    2008-01-01

    Fundamental to infants' acquisition of their native language is an inherent interest in the language spoken around them over non-linguistic environmental sounds. The following studies explored whether the bias for linguistic signals in hearing infants is specific to speech, or reflects a general bias for all human language, spoken and signed. Results indicate that 6-month-old infants prefer an unfamiliar, visual-gestural language (American Sign Language) over non-linguistic pantomime, but 10-month-olds do not. These data provide evidence against a speech-specific bias in early infancy and provide insights into those properties of human languages that may underlie this language-general attentional bias.

  7. Brain Bases of Morphological Processing in Young Children

    PubMed Central

    Arredondo, Maria M.; Ip, Ka I; Hsu, Lucy Shih-Ju; Tardif, Twila; Kovelman, Ioulia

    2017-01-01

    How does the developing brain support the transition from spoken language to print? Two spoken language abilities form the initial base of child literacy across languages: knowledge of language sounds (phonology) and knowledge of the smallest units that carry meaning (morphology). While phonology has received much attention from the field, the brain mechanisms that support morphological competence for learning to read remain largely unknown. In the present study, young English-speaking children completed an auditory morphological awareness task behaviorally (n = 69, ages 6–12) and in fMRI (n = 16). The data revealed two findings: First, children with better morphological abilities showed greater activation in left temporo-parietal regions previously thought to be important for supporting phonological reading skills, suggesting that this region supports multiple language abilities for successful reading acquisition. Second, children showed activation in left frontal regions previously found active in young Chinese readers, suggesting morphological processes for reading acquisition might be similar across languages. These findings offer new insights for developing a comprehensive model of how spoken language abilities support children’s reading acquisition across languages. PMID:25930011

  8. Social scale and structural complexity in human languages.

    PubMed

    Nettle, Daniel

    2012-07-05

    The complexity of different components of the grammars of human languages can be quantified. For example, languages vary greatly in the size of their phonological inventories, and in the degree to which they make use of inflectional morphology. Recent studies have shown that there are relationships between these types of grammatical complexity and the number of speakers a language has. Languages spoken by large populations have been found to have larger phonological inventories, but simpler morphology, than languages spoken by small populations. The results require further investigation, and, most importantly, the mechanism whereby the social context of learning and use affects the grammatical evolution of a language needs elucidation.

  9. E-cigarette use and disparities by race, citizenship status and language among adolescents.

    PubMed

    Alcalá, Héctor E; Albert, Stephanie L; Ortega, Alexander N

    2016-06-01

    E-cigarette use among adolescents is on the rise in the U.S. However, limited attention has been given to examining the role of race, citizenship status and language spoken at home in shaping e-cigarette use behavior. Data are from the 2014 Adolescent California Health Interview Survey, which interviewed 1052 adolescents ages 12-17. Lifetime e-cigarette use was examined by sociodemographic characteristics. Separate logistic regression models predicted odds of ever-smoking e-cigarettes from race, citizenship status and language spoken at home. Sociodemographic characteristics were then added to these models as control variables, and a model with all three predictors and controls was run. Similar models were run with conventional smoking as an outcome. Overall, 10.3% of adolescents had ever used e-cigarettes. E-cigarette use was higher among ever-smokers of conventional cigarettes, individuals above 200% of the Federal Poverty Level, US citizens and those who spoke English-only at home. Multivariate analyses demonstrated that citizenship status and language spoken at home were associated with lifetime e-cigarette use, after accounting for control variables. Only citizenship status remained associated with e-cigarette use when the control variables, race, and language spoken at home were all included in the same model. Ever use of e-cigarettes in this study was higher than previously reported national estimates. Action is needed to curb the use of e-cigarettes among adolescents. Differences in lifetime e-cigarette use by citizenship status and language spoken at home suggest that less acculturated individuals use e-cigarettes at lower rates. Copyright © 2016 Elsevier Ltd. All rights reserved.
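
The association estimates above come from logistic models of a binary outcome. As a minimal sketch of that modeling pattern (not the CHIS 2014 analysis; the data and variable names below are synthetic and hypothetical), a logistic regression fit with plain NumPy gradient ascent:

```python
# Minimal sketch of logistic regression for a binary outcome on binary
# predictors, fit by gradient ascent with NumPy. All data are synthetic and
# variable names are hypothetical; this is not the CHIS 2014 analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
citizen = rng.integers(0, 2, n).astype(float)       # 1 = US citizen
english_home = rng.integers(0, 2, n).astype(float)  # 1 = English-only at home

# Synthetic "true" model: higher use among citizens and English-only homes
true_logit = -2.5 + 0.8 * citizen + 0.6 * english_home
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

X = np.column_stack([np.ones(n), citizen, english_home])
beta = np.zeros(3)
for _ in range(2000):                 # gradient ascent on the log-likelihood
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 1.0 * X.T @ (y - p) / n   # mean-gradient step, learning rate 1.0

odds_ratios = np.exp(beta[1:])
print("odds ratios (citizen, english_home):", odds_ratios.round(2))
```

With the synthetic effects above, both fitted odds ratios come out above 1, mirroring the direction of the reported associations; a real analysis would add the remaining control variables and the survey's sampling weights.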

  10. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    PubMed

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries, natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we use fMRI to compare the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged the left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. A geographical analysis of speech-language pathology services to support multilingual children.

    PubMed

    Verdon, Sarah; McLeod, Sharynne; McDonald, Simon

    2014-06-01

    The speech-language pathology workforce strives to provide equitable, quality services to multilingual people. However, the extent to which this is being achieved is unknown. Participants in this study were 2849 members of Speech Pathology Australia and 4386 children in the Birth cohort of the Longitudinal Study of Australian Children (LSAC). Statistical and geospatial analyses were undertaken to identify the linguistic diversity and geographical distribution of Australian speech-language pathology services and Australian children. One-fifth of services offered by Speech Pathology Australia members (20.2%) were available in a language other than English. Services were most commonly offered in Australian Sign Language (Auslan) (4.3%), French (3.1%), Italian (2.2%), Greek (1.6%), and Cantonese (1.5%). Among 4-5-year-old children in the nationally representative LSAC, 15.3% regularly spoke and/or understood a language other than English. The most common languages spoken by the children were Arabic (1.5%), Italian (1.2%), Greek (0.9%), Spanish (0.9%), and Vietnamese (0.9%). There was a mismatch between the location of and languages in which multilingual services were offered, and the location of and languages spoken by children. These findings highlight the need for speech-language pathologists (SLPs) to be culturally competent in providing equitable services to all clients, regardless of the languages they speak.

  12. Neural organization of linguistic short-term memory is sensory modality-dependent: evidence from signed and spoken language.

    PubMed

    Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory

    2008-12-01

    Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.

  13. Variation in Discourse Strategies in a Multilingual Context

    ERIC Educational Resources Information Center

    Bai, B. Lakshmi

    2010-01-01

    This paper is an attempt to study empirically a sample of spoken narratives of Hindi, Telugu and Dakkhini speakers in the multilingual setting of Hyderabad. After a brief account of multilingualism and variation within a language as commonly occurring phenomena, the paper examines the spoken narratives of the three languages mentioned above with a…

  14. Spoken Grammar Practice and Feedback in an ASR-Based CALL System

    ERIC Educational Resources Information Center

    de Vries, Bart Penning; Cucchiarini, Catia; Bodnar, Stephen; Strik, Helmer; van Hout, Roeland

    2015-01-01

    Speaking practice is important for learners of a second language. Computer assisted language learning (CALL) systems can provide attractive opportunities for speaking practice when combined with automatic speech recognition (ASR) technology. In this paper, we present a CALL system that offers spoken practice of word order, an important aspect of…

  15. Does It Really Matter whether Students' Contributions Are Spoken versus Typed in an Intelligent Tutoring System with Natural Language?

    ERIC Educational Resources Information Center

    D'Mello, Sidney K.; Dowell, Nia; Graesser, Arthur

    2011-01-01

    There is the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The "speech facilitation" hypothesis predicts that spoken input will "increase" learning,…

  16. On-Line Syntax: Thoughts on the Temporality of Spoken Language

    ERIC Educational Resources Information Center

    Auer, Peter

    2009-01-01

    One fundamental difference between spoken and written language has to do with the "linearity" of speaking in time, in that the temporal structure of speaking is inherently the outcome of an interactive process between speaker and listener. But despite the status of "linearity" as one of Saussure's fundamental principles, in practice little more…

  17. A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome

    ERIC Educational Resources Information Center

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-01-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…

  18. Comparing Spoken Language Treatments for Minimally Verbal Preschoolers with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna

    2013-01-01

    Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in…

  19. Influences of Indigenous Language on Spatial Frames of Reference in Aboriginal English

    ERIC Educational Resources Information Center

    Edmonds-Wathen, Cris

    2014-01-01

    The Aboriginal English spoken by Indigenous children in remote communities in the Northern Territory of Australia is influenced by the home languages spoken by themselves and their families. This affects uses of spatial terms used in mathematics such as "in front" and "behind." Speakers of the endangered Indigenous Australian…

  20. "Now We Have Spoken."

    ERIC Educational Resources Information Center

    Zimmer, Patricia Moore

    2001-01-01

    Describes the author's experiences directing a play translated and acted in Korean. Notes that she had to get familiar with the sound of the language spoken fluently, to see how an actor's thought is discerned when the verbal language is not understood. Concludes that so much of understanding and communication unfolds in ways other than with…

  1. Research on Spoken Language Processing. Progress Report No. 21 (1996-1997).

    ERIC Educational Resources Information Center

    Pisoni, David B.

    This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…

  2. A Grammar of Karbi

    ERIC Educational Resources Information Center

    Konnerth, Linda Anna

    2014-01-01

    Karbi is a Tibeto-Burman (TB) language spoken by half a million people in the Karbi Anglong district in Assam, Northeast India, and surrounding areas in the extended Brahmaputra Valley area. It is an agglutinating, verb-final language. This dissertation offers a description of the dialect spoken in the hills of the Karbi Anglong district. It is…

  3. L2 Gender Facilitation and Inhibition in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Behney, Jennifer N.

    2011-01-01

    This dissertation investigates the role of grammatical gender facilitation and inhibition in second language (L2) learners' spoken word recognition. Native speakers of languages that have grammatical gender are sensitive to gender marking when hearing and recognizing a word. Gender facilitation refers to when a given noun that is preceded by an…

  4. Relationship between affect and achievement in science and mathematics in Malaysia and Singapore

    NASA Astrophysics Data System (ADS)

    Thoe Ng, Khar; Fah Lay, Yoon; Areepattamannil, Shaljan; Treagust, David F.; Chandrasegaran, A. L.

    2012-11-01

    Background: The Trends in International Mathematics and Science Study (TIMSS) assesses the quality of the teaching and learning of science and mathematics among Grades 4 and 8 students across participating countries. Purpose: This study explored the relationship between positive affect towards science and mathematics and achievement in science and mathematics among Malaysian and Singaporean Grade 8 students. Sample: In total, 4466 Malaysian students and 4599 Singaporean students from Grade 8 who participated in TIMSS 2007 were involved in this study. Design and method: Students' achievement scores on eight items in the survey instrument that were reported in TIMSS 2007 were used as the dependent variable in the analysis. Students' scores on four items in the TIMSS 2007 survey instrument pertaining to students' affect towards science and mathematics, together with students' gender, language spoken at home and parental education, were used as the independent variables. Results: Positive affect towards science and mathematics had statistically significant predictive effects on achievement in the two subjects for both Malaysian and Singaporean Grade 8 students. There were statistically significant predictive effects on mathematics achievement for students' gender, language spoken at home and parental education for both Malaysian and Singaporean students, with R² = 0.18 and 0.21, respectively. However, only parental education showed statistically significant predictive effects on science achievement for both countries. For Singapore, language spoken at home also demonstrated statistically significant predictive effects on science achievement, whereas gender did not. For Malaysia, neither gender nor language spoken at home had statistically significant predictive effects on science achievement. Conclusions: It is important for educators to consider implementing self-concept enhancement intervention programmes by incorporating 'affect' components of academic self-concept in order to develop students' talents and promote academic excellence in science and mathematics.

  5. An Eye Tracking Study on the Perception and Comprehension of Unimodal and Bimodal Linguistic Inputs by Deaf Adolescents

    PubMed Central

    Mastrantuono, Eliana; Saldaña, David; Rodríguez-Ortiz, Isabel R.

    2017-01-01

    An eye tracking experiment explored the gaze behavior of deaf individuals when perceiving language in spoken and sign language only, and in sign-supported speech (SSS). Participants were deaf (n = 25) and hearing (n = 25) Spanish adolescents. Deaf students were prelingually profoundly deaf individuals with cochlear implants (CIs) used by age 5 or earlier, or prelingually profoundly deaf native signers with deaf parents. The effectiveness of SSS has rarely been tested within the same group of children for discourse-level comprehension. Here, video-recorded texts, including spatial descriptions, were alternately transmitted in spoken language, sign language and SSS. The capacity of these communicative systems to equalize comprehension in deaf participants with that of spoken language in hearing participants was tested. Within-group analyses of deaf participants tested if the bimodal linguistic input of SSS favored discourse comprehension compared to unimodal languages. Deaf participants with CIs achieved equal comprehension to hearing controls in all communicative systems while deaf native signers with no CIs achieved equal comprehension to hearing participants if tested in their native sign language. Comprehension of SSS was not increased compared to spoken language, even when spatial information was communicated. Eye movements of deaf and hearing participants were tracked and data of dwell times spent looking at the face or body area of the sign model were analyzed. Within-group analyses focused on differences between native and non-native signers. Dwell times of hearing participants were equally distributed across upper and lower areas of the face while deaf participants mainly looked at the mouth area; this could enable information to be obtained from mouthings in sign language and from lip-reading in SSS and spoken language. Few fixations were directed toward the signs, although these were more frequent when spatial language was transmitted. Both native and non-native signers looked mainly at the face when perceiving sign language, although non-native signers looked significantly more at the body than native signers. This distribution of gaze fixations suggested that deaf individuals – particularly native signers – mainly perceived signs through peripheral vision. PMID:28680416

  6. An Eye Tracking Study on the Perception and Comprehension of Unimodal and Bimodal Linguistic Inputs by Deaf Adolescents.

    PubMed

    Mastrantuono, Eliana; Saldaña, David; Rodríguez-Ortiz, Isabel R

    2017-01-01

    An eye tracking experiment explored the gaze behavior of deaf individuals when perceiving language in spoken and sign language only, and in sign-supported speech (SSS). Participants were deaf (n = 25) and hearing (n = 25) Spanish adolescents. Deaf students were prelingually profoundly deaf individuals with cochlear implants (CIs) used by age 5 or earlier, or prelingually profoundly deaf native signers with deaf parents. The effectiveness of SSS has rarely been tested within the same group of children for discourse-level comprehension. Here, video-recorded texts, including spatial descriptions, were alternately transmitted in spoken language, sign language and SSS. The capacity of these communicative systems to equalize comprehension in deaf participants with that of spoken language in hearing participants was tested. Within-group analyses of deaf participants tested if the bimodal linguistic input of SSS favored discourse comprehension compared to unimodal languages. Deaf participants with CIs achieved equal comprehension to hearing controls in all communicative systems while deaf native signers with no CIs achieved equal comprehension to hearing participants if tested in their native sign language. Comprehension of SSS was not increased compared to spoken language, even when spatial information was communicated. Eye movements of deaf and hearing participants were tracked and data of dwell times spent looking at the face or body area of the sign model were analyzed. Within-group analyses focused on differences between native and non-native signers. Dwell times of hearing participants were equally distributed across upper and lower areas of the face while deaf participants mainly looked at the mouth area; this could enable information to be obtained from mouthings in sign language and from lip-reading in SSS and spoken language. Few fixations were directed toward the signs, although these were more frequent when spatial language was transmitted. Both native and non-native signers looked mainly at the face when perceiving sign language, although non-native signers looked significantly more at the body than native signers. This distribution of gaze fixations suggested that deaf individuals - particularly native signers - mainly perceived signs through peripheral vision.

  7. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic.

    PubMed

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2018-01-01

    This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children's phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children's early morphological awareness in SpA explained variance in children's gains in reading fluency in StA. These findings make important theoretical and practical contributions to Arabic reading theory in general, and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.

  8. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    PubMed Central

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2018-01-01

    This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings make important theoretical and practical contributions to Arabic reading theory in general, and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts. PMID:29686633
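
The regression finding above (SpA morphological awareness explaining additional variance in StA reading fluency) follows the incremental-variance logic of hierarchical regression: fit a baseline model, add the predictor of interest, and compare R². A toy sketch of that logic with NumPy least squares, on synthetic data with hypothetical variable names (not the study's data or model):

```python
# Toy sketch of an incremental-variance (delta R^2) check with NumPy: does a
# second predictor (SpA morphological awareness) explain variance in the
# outcome (StA reading fluency) beyond a baseline predictor (here, grade)?
# All data are synthetic; this is not the authors' analysis.
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 300
grade = rng.normal(size=n)                       # baseline predictor
morph_spa = 0.5 * grade + rng.normal(size=n)     # correlated with the baseline
fluency_sta = 0.6 * grade + 0.4 * morph_spa + rng.normal(size=n)

r2_base = r_squared(grade[:, None], fluency_sta)
r2_full = r_squared(np.column_stack([grade, morph_spa]), fluency_sta)
print(f"R^2 baseline={r2_base:.3f}, full={r2_full:.3f}, "
      f"delta={r2_full - r2_base:.3f}")
```

A positive delta R² indicates that the added predictor accounts for variance the baseline does not; in a real analysis its significance would be tested with an F-test on the nested models.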

  9. Use of spoken and written Japanese did not protect Japanese-American men from cognitive decline in late life.

    PubMed

    Crane, Paul K; Gruhl, Jonathan C; Erosheva, Elena A; Gibbons, Laura E; McCurry, Susan M; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon

    2010-11-01

    Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900-1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve.

  10. Use of Spoken and Written Japanese Did Not Protect Japanese-American Men From Cognitive Decline in Late Life

    PubMed Central

    Gruhl, Jonathan C.; Erosheva, Elena A.; Gibbons, Laura E.; McCurry, Susan M.; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon

    2010-01-01

    Objectives. Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Methods. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900–1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Results. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. Discussion. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve. PMID:20639282

  11. Lévy-like diffusion in eye movements during spoken-language comprehension.

    PubMed

    Stephen, Damian G; Mirman, Daniel; Magnuson, James S; Dixon, James A

    2009-05-01

    This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been conventional to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.

  12. Lévy-like diffusion in eye movements during spoken-language comprehension

    NASA Astrophysics Data System (ADS)

    Stephen, Damian G.; Mirman, Daniel; Magnuson, James S.; Dixon, James A.

    2009-05-01

    This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been conventional to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.
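    The standard deviation analysis named in these two records estimates a scaling exponent H from SD(t) ∝ t^H: H ≈ 0.5 indicates ordinary Brownian diffusion, H > 0.5 hyperdiffusion. A minimal pure-Python simulation of the idea on synthetic walks (not the study's gaze data; the heavy-tail parameter 1.5 is an assumption for illustration):

```python
import math
import random

def walk(n_steps, levy=False, seed=0):
    """Simulate a 1-D random walk; Levy-like walks draw heavy-tailed steps."""
    rng = random.Random(seed)
    pos, path = 0.0, [0.0]
    for _ in range(n_steps):
        if levy:
            # Pareto-distributed magnitude (shape < 2 -> heavy tails)
            step = rng.paretovariate(1.5) * rng.choice((-1.0, 1.0))
        else:
            step = rng.gauss(0.0, 1.0)
        pos += step
        path.append(pos)
    return path

def scaling_exponent(n_walks, n_steps, levy=False):
    """Estimate H from SD(t) ~ t^H by log-log regression over dyadic times."""
    times = [2 ** k for k in range(2, int(math.log2(n_steps)) + 1)]
    paths = [walk(n_steps, levy, seed=i) for i in range(n_walks)]
    xs, ys = [], []
    for t in times:
        vals = [p[t] for p in paths]
        mean = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
        xs.append(math.log(t))
        ys.append(math.log(sd))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
```

    The Gaussian walk should recover H near 0.5, while the heavy-tailed walk yields a larger exponent, which is the qualitative signature the study reports for eye movements.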

  13. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  14. Young children's communication and literacy: a qualitative study of language in the inclusive preschool.

    PubMed

    Kliewer, C

    1995-06-01

    Interactive and literacy-based language use of young children was explored within the context of an inclusive preschool classroom. An interpretivist framework and qualitative research methods, including participant observation, were used to examine and analyze language in five preschool classes composed of children with and without disabilities. Children's language use included spoken, written, signed, and typed forms. Results showed complex communicative and literacy-related language use by young children that fell outside conventional adult perspectives. Also, children who used expressive methods other than speech were often left out of the contexts where spoken language was richest and most complex.

  15. Language and reading development in the brain today: neuromarkers and the case for prediction.

    PubMed

    Buchweitz, Augusto

    2016-01-01

    The goal of this article is to provide an account of language development in the brain using the new information about brain function gleaned from cognitive neuroscience. This account goes beyond describing the association between language and specific brain areas to advocate the possibility of predicting language outcomes using brain-imaging data. The goal is to address the current evidence about language development in the brain and prediction of language outcomes. Recent studies will be discussed in the light of the evidence generated for predicting language outcomes and using new methods of analysis of brain data. The present account of brain behavior will address: (1) the development of a hardwired brain circuit for spoken language; (2) the neural adaptation that follows reading instruction and fosters the "grafting" of visual processing areas of the brain onto the hardwired circuit of spoken language; and (3) the prediction of language development and the possibility of translational neuroscience. Brain imaging has allowed for the identification of neural indices (neuromarkers) that reflect typical and atypical language development; the possibility of predicting risk for language disorders has emerged. A mandate to develop a bridge between neuroscience and health and cognition-related outcomes may pave the way for translational neuroscience.

  16. Using Unscripted Spoken Texts in the Teaching of Second Language Listening

    ERIC Educational Resources Information Center

    Wagner, Elvis

    2014-01-01

    Most spoken texts that are used in second language (L2) listening classroom activities are scripted texts, where the text is written, revised, polished, and then read aloud with artificially clear enunciation and slow rate of speech. This article explores the field's overreliance on these scripted texts, at the expense of including unscripted…

  17. Revisiting Debates on Oracy: Classroom Talk--Moving towards a Democratic Pedagogy?

    ERIC Educational Resources Information Center

    Coultas, Valerie

    2015-01-01

    This article uses documentary evidence to review debates on spoken language and learning in the UK over recent decades. It argues that two different models of talk have been at stake: one that wishes to "correct" children's spoken language and another that encourages children to use talk to learn and represent their worlds. The article…

  18. The Contribution of the Inferior Parietal Cortex to Spoken Language Production

    ERIC Educational Resources Information Center

    Geranmayeh, Fatemeh; Brownsett, Sonia L. E.; Leech, Robert; Beckmann, Christian F.; Woodhead, Zoe; Wise, Richard J. S.

    2012-01-01

    This functional MRI study investigated the involvement of the left inferior parietal cortex (IPC) in spoken language production (Speech). Its role has been apparent in some studies but not others, and is not convincingly supported by clinical studies as they rarely include cases with lesions confined to the parietal lobe. We compared Speech with…

  19. Hearing as Touch in a Multilingual Film Interview: The Interviewer's Linguistic Incompetence as Aesthetic Key Moment

    ERIC Educational Resources Information Center

    Frimberger, Katja

    2016-01-01

    This article explores the author's embodied experience of linguistic incompetence in the context of an interview-based, short, promotional film production about people's personal connections to their spoken languages in Glasgow, Scotland/UK. The article highlights that people's right to their spoken languages during film interviews and the…

  20. Professional Training in Listening and Spoken Language--A Canadian Perspective

    ERIC Educational Resources Information Center

    Fitzpatrick, Elizabeth

    2010-01-01

    Several factors undoubtedly influenced the development of listening and spoken language options for children with hearing loss in Canada. The concept of providing auditory-based rehabilitation was popularized in Canada in the 1960s through the work of Drs. Daniel Ling and Agnes Ling in Montreal. The Lings founded the McGill University Project for…

  1. Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language

    ERIC Educational Resources Information Center

    Nicholas, Johanna G.; Geers, Ann E.

    2008-01-01

    Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…

  2. A Race to Rescue Native Tongues

    ERIC Educational Resources Information Center

    Ashburn, Elyse

    2007-01-01

    Of the 300 or so native languages once spoken in North America, only about 150 are still spoken--and the majority of those have just a handful of mostly elderly speakers. For most Native American languages, colleges and universities are their last great hope, if not their final resting place. People at a number of institutions across the country…

  3. Guidelines for Evaluating Auditory-Oral Programs for Children Who Are Hearing Impaired.

    ERIC Educational Resources Information Center

    Alexander Graham Bell Association for the Deaf, Inc., Washington, DC.

    These guidelines are intended to assist parents in evaluating educational programs for children who are hearing impaired, where a program's stated intention is promoting the child's optimal use of spoken language as a mode of everyday communication and learning. The guidelines are applicable to programs where spoken language is the sole mode or…

  4. Beyond Rhyme or Reason: ERPs Reveal Task-Specific Activation of Orthography on Spoken Language

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Perre, Laetitia; Ziegler, Johannes C.

    2011-01-01

    Metaphonological tasks, such as rhyme judgment, have been the primary tool for the investigation of the effects of orthographic knowledge on spoken language. However, it has been recently argued that the orthography effect in rhyme judgment does not reflect the automatic activation of orthographic codes but rather stems from sophisticated response…

  5. Effects of Tasks on Spoken Interaction and Motivation in English Language Learners

    ERIC Educational Resources Information Center

    Carrero Pérez, Nubia Patricia

    2016-01-01

    Task based learning (TBL) or Task based learning and teaching (TBLT) is a communicative approach widely applied in settings where English has been taught as a foreign language (EFL). It has been documented as greatly useful to improve learners' communication skills. This research intended to find the effect of tasks on students' spoken interaction…

  6. Self-Ratings of Spoken Language Dominance: A Multilingual Naming Test (MINT) and Preliminary Norms for Young and Aging Spanish-English Bilinguals

    ERIC Educational Resources Information Center

    Gollan, Tamar H.; Weissberger, Gali H.; Runnqvist, Elin; Montoya, Rosa I.; Cera, Cynthia M.

    2012-01-01

    This study investigated correspondence between different measures of bilingual language proficiency contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and…

  7. The Academic Spoken Word List

    ERIC Educational Resources Information Center

    Dang, Thi Ngoc Yen; Coxhead, Averil; Webb, Stuart

    2017-01-01

    The linguistic features of academic spoken English are different from those of academic written English. Therefore, for this study, an Academic Spoken Word List (ASWL) was developed and validated to help second language (L2) learners enhance their comprehension of academic speech in English-medium universities. The ASWL contains 1,741 word…

  8. Comprehension of spoken language in non-speaking children with severe cerebral palsy: an explorative study on associations with motor type and disabilities.

    PubMed

    Geytenbeek, Joke J M; Vermeulen, R Jeroen; Becher, Jules G; Oostrom, Kim J

    2015-03-01

    To assess spoken language comprehension in non-speaking children with severe cerebral palsy (CP) and to explore possible associations with motor type and disability. Eighty-seven non-speaking children (44 males, 43 females, mean age 6y 8mo, SD 2y 1mo) with spastic (54%) or dyskinetic (46%) CP (Gross Motor Function Classification System [GMFCS] levels IV [39%] and V [61%]) underwent spoken language comprehension assessment with the computer-based instrument for low motor language testing (C-BiLLT), a new and validated diagnostic instrument. A multiple linear regression model was used to investigate which variables explained the variation in C-BiLLT scores. Associations between spoken language comprehension abilities (expressed as z-scores or age-equivalent scores) and motor type of CP, GMFCS and Manual Ability Classification System (MACS) levels, gestational age, and epilepsy were analysed with Fisher's exact test. A p-value <0.05 was considered statistically significant. Chronological age, motor type, and GMFCS classification explained 33% (R=0.577, R(2)=0.33) of the variance in spoken language comprehension. Of the children younger than 6 years 6 months, 52.4% of those with dyskinetic CP attained comprehension scores within the average range (z-score ≥-1.6), as opposed to none of the children with spastic CP. Of the children older than 6 years 6 months, 32% of those with dyskinetic CP reached the highest achievable age-equivalent score, compared with 4% of those with spastic CP. No significant associations were found with the other CP-related variables (MACS level, gestational age, epilepsy), with the exception of GMFCS, which showed a significant difference in children younger than 6 years 6 months (p=0.043). Despite communication disabilities in children with severe CP, particularly dyskinetic CP, spoken language comprehension may show no or only moderate delay.
These findings emphasize the importance of introducing alternative and/or augmentative communication devices from early childhood.
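    The R² = 0.33 in the record above is the share of variance in comprehension scores explained by the regression predictors. As an illustration of how a multiple linear regression R² is computed, a self-contained ordinary-least-squares sketch via the normal equations; the predictor and outcome values it is run on are hypothetical, not the study's data:

```python
def ols_r_squared(X, y):
    """Fit y = b0 + b1*x1 + ... by ordinary least squares and return R^2,
    the proportion of variance in y explained by the predictors."""
    n = len(y)
    # Design matrix with an intercept column prepended.
    Z = [[1.0] + list(row) for row in X]
    p = len(Z[0])
    # Normal equations: (Z'Z) b = Z'y.
    A = [[sum(Z[i][j] * Z[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    b = [sum(Z[i][j] * y[i] for i in range(n)) for j in range(p)]
    # Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * p
    for r in range(p - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, p))) / A[r][r]
    y_hat = [sum(c * z for c, z in zip(coef, Z[i])) for i in range(n)]
    mean_y = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```

    Categorical predictors such as motor type or GMFCS level would enter as dummy-coded columns of X.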

  9. Low self-concept in poor readers: prevalence, heterogeneity, and risk.

    PubMed

    McArthur, Genevieve; Castles, Anne; Kohnen, Saskia; Banales, Erin

    2016-01-01

    There is evidence that poor readers are at increased risk for various types of low self-concept-particularly academic self-concept. However, this evidence ignores the heterogeneous nature of poor readers, and hence the likelihood that not all poor readers have low self-concept. The aim of this study was to better understand which types of poor readers have low self-concept. We tested 77 children with poor reading for their age for four types of self-concept, four types of reading, three types of spoken language, and two types of attention. We found that poor readers with poor attention had low academic self-concept, while poor readers with poor spoken language had low general self-concept in addition to low academic self-concept. In contrast, poor readers with typical spoken language and attention did not have low self-concept of any type. We also discovered that academic self-concept was reliably associated with reading and receptive spoken vocabulary, and that general self-concept was reliably associated with spoken vocabulary. These outcomes suggest that poor readers with multiple impairments in reading, language, and attention are at higher risk for low academic and general self-concept, and hence need to be assessed for self-concept in clinical practice. Our results also highlight the need for further investigation into the heterogeneous nature of self-concept in poor readers.

  10. Low self-concept in poor readers: prevalence, heterogeneity, and risk

    PubMed Central

    McArthur, Genevieve; Castles, Anne; Kohnen, Saskia; Banales, Erin

    2016-01-01

    There is evidence that poor readers are at increased risk for various types of low self-concept—particularly academic self-concept. However, this evidence ignores the heterogeneous nature of poor readers, and hence the likelihood that not all poor readers have low self-concept. The aim of this study was to better understand which types of poor readers have low self-concept. We tested 77 children with poor reading for their age for four types of self-concept, four types of reading, three types of spoken language, and two types of attention. We found that poor readers with poor attention had low academic self-concept, while poor readers with poor spoken language had low general self-concept in addition to low academic self-concept. In contrast, poor readers with typical spoken language and attention did not have low self-concept of any type. We also discovered that academic self-concept was reliably associated with reading and receptive spoken vocabulary, and that general self-concept was reliably associated with spoken vocabulary. These outcomes suggest that poor readers with multiple impairments in reading, language, and attention are at higher risk for low academic and general self-concept, and hence need to be assessed for self-concept in clinical practice. Our results also highlight the need for further investigation into the heterogeneous nature of self-concept in poor readers. PMID:27867764

  11. Will They Catch Up? The Role of Age at Cochlear Implantation in the Spoken Language Development of Children with Severe to Profound Hearing Loss

    ERIC Educational Resources Information Center

    Nicholas, Johanna Grant; Geers, Ann E.

    2007-01-01

    Purpose: The authors examined the benefits of younger cochlear implantation, longer cochlear implant use, and greater pre-implant aided hearing to spoken language at 3.5 and 4.5 years of age. Method: Language samples were obtained at ages 3.5 and 4.5 years from 76 children who received an implant by their 3rd birthday. Hierarchical linear modeling…

  12. Selective auditory attention in adults: effects of rhythmic structure of the competing language.

    PubMed

    Reel, Leigh Ann; Hicks, Candace Bourland

    2012-02-01

    The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Reception thresholds for English sentences were measured for 50 monolingual English-speaking adults in conditions with 2-talker (male-female) competing speech spoken in a stress-based (English, German), syllable-based (Spanish, French), or mora-based (Japanese) language. Two different masking signals were created for each language (i.e., 2 different 2-talker pairs). All subjects were tested in 10 competing conditions (2 conditions for each of the 5 languages). A significant difference was noted between the 2 masking signals within each language. Across languages, significantly greater listening difficulty was observed in conditions where competing speech was spoken in English, German, or Japanese, as compared with Spanish or French. Results suggest that (a) for a particular language, masking effectiveness can vary between different male-female 2-talker maskers and (b) for stress-based vs. syllable-based languages, competing speech is more difficult to ignore when spoken in a language from the native rhythmic class as compared with a nonnative rhythmic class, regardless of whether the language is familiar or unfamiliar to the listener.

  13. Primary phonological planning units in spoken word production are language-specific: Evidence from an ERP study.

    PubMed

    Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih

    2017-07-19

    It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.

  14. Selected Topics from LVCSR Research for Asian Languages at Tokyo Tech

    NASA Astrophysics Data System (ADS)

    Furui, Sadaoki

    This paper presents our recent work in regard to building Large Vocabulary Continuous Speech Recognition (LVCSR) systems for the Thai, Indonesian, and Chinese languages. For Thai, since there is no word boundary in the written form, we have proposed a new method for automatically creating word-like units from a text corpus, and applied topic and speaking style adaptation to the language model to recognize spoken-style utterances. For Indonesian, we have applied proper noun-specific adaptation to acoustic modeling, and rule-based English-to-Indonesian phoneme mapping to solve the problem of large variation in proper noun and English word pronunciation in a spoken-query information retrieval system. In spoken Chinese, long organization names are frequently abbreviated, and abbreviated utterances cannot be recognized if the abbreviations are not included in the dictionary. We have proposed a new method for automatically generating Chinese abbreviations, and by expanding the vocabulary using the generated abbreviations, we have significantly improved the performance of spoken query-based search.

  15. The cultural and linguistic diversity of 3-year-old children with hearing loss.

    PubMed

    Crowe, Kathryn; McLeod, Sharynne; Ching, Teresa Y C

    2012-01-01

    Understanding the cultural and linguistic diversity of young children with hearing loss informs the provision of assessment, habilitation, and education services to both children and their families. Data describing communication mode, oral language use, and demographic characteristics were collected for 406 children with hearing loss and their caregivers when children were 3 years old. The data were from the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study, a prospective, population-based study of children with hearing loss in Australia. The majority of the 406 children used spoken English at home; however, 28 other languages also were spoken. Compared with their caregivers, the children in this study used fewer spoken languages and had higher rates of oral monolingualism. Few children used a spoken language other than English in their early education environment. One quarter of the children used sign to communicate at home and/or in their early education environment. No associations between caregiver hearing status and children's communication mode were identified. This exploratory investigation of the communication modes and languages used by young children with hearing loss and their caregivers provides an initial examination of the cultural and linguistic diversity and heritage language attrition of this population. The findings of this study have implications for the development of resources and the provision of early education services to the families of children with hearing loss, especially where the caregivers use a language that is not the lingua franca of their country of residence.

  16. The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing

    PubMed Central

    Gow, David W.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing. PMID:22498237

  17. Is Language a Barrier to the Use of Preventive Services?

    PubMed Central

    Woloshin, Steven; Schwartz, Lisa M; Katz, Steven J; Welch, H Gilbert

    1997-01-01

    OBJECTIVE To isolate the effect of spoken language from financial barriers to care, we examined the relation of language to use of preventive services in a system with universal access. DESIGN Cross-sectional survey. SETTING Household population of women living in Ontario, Canada, in 1990. PARTICIPANTS Subjects were 22,448 women completing the 1990 Ontario Health Survey, a population-based random sample of households. MEASUREMENTS AND MAIN RESULTS We defined language as the language spoken in the home and assessed self-reported receipt of breast examination, mammogram and Pap testing. We used logistic regression to calculate odds ratios for each service adjusting for potential sources of confounding: socioeconomic characteristics, contact with the health care system, and measures reflecting culture. Ten percent of the women spoke a non-English language at home (4% French, 6% other). After adjustment, compared with English speakers, French-speaking women were significantly less likely to receive breast exams or mammography, and other language speakers were less likely to receive Pap testing. CONCLUSIONS Women whose main spoken language was not English were less likely to receive important preventive services. Improving communication with patients with limited English may enhance participation in screening programs. PMID:9276652
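    The adjusted odds ratios in this record come from logistic regression; the unadjusted version of the same quantity can be read straight off a 2 × 2 table of exposure by outcome. A minimal sketch with Woolf's approximate 95% confidence interval (the counts used below are hypothetical, not the survey's data):

```python
import math

def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Unadjusted odds ratio for a yes/no outcome by exposure group,
    with Woolf's 95% confidence interval computed on the log scale."""
    or_ = (exposed_yes * unexposed_no) / (exposed_no * unexposed_yes)
    # Standard error of log(OR): sqrt of summed reciprocal cell counts.
    se = math.sqrt(1 / exposed_yes + 1 / exposed_no
                   + 1 / unexposed_yes + 1 / unexposed_no)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```

    Logistic regression generalizes this by adjusting the ratio for covariates such as socioeconomic status and health-system contact, as the study did.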

  18. Mental Disorders in Deaf and Hard of Hearing Adult Outpatients: A Comparison of Linguistic Subgroups.

    PubMed

    Øhre, Beate; Volden, Maj; Falkum, Erik; von Tetzchner, Stephen

    2017-01-01

    Deaf and hard of hearing (DHH) individuals who use signed language and those who use spoken language face different challenges and stressors. Accordingly, the profile of their mental problems may also differ. However, studies of mental disorders in this population have seldom differentiated between linguistic groups. Our study compares demographics, mental disorders, and levels of distress and functioning in 40 patients using Norwegian Sign Language (NSL) and 36 patients using spoken language. Assessment instruments were translated into NSL. More signers were deaf than hard of hearing, did not share a common language with their childhood caregivers, and had attended schools for DHH children. More Norwegian-speaking than signing patients reported medical comorbidity, whereas the distribution of mental disorders, symptoms of anxiety and depression, and daily functioning did not differ significantly. Somatic complaints and greater perceived social isolation indicate higher stress levels in DHH patients using spoken language than in those using sign language. Therefore, preventive interventions are necessary, as well as larger epidemiological and clinical studies concerning the mental health of all language groups within the DHH population.

  19. Phonological memory in sign language relies on the visuomotor neural system outside the left hemisphere language network.

    PubMed

    Kanazawa, Yuji; Nakamura, Kimihiro; Ishii, Toru; Aso, Toshihiko; Yamazaki, Hiroshi; Omori, Koichi

    2017-01-01

    Sign language is an essential medium for everyday social interaction for deaf people and plays a critical role in verbal learning. In particular, language development in deaf individuals should rely heavily on verbal short-term memory (STM) via sign language. Most previous studies compared neural activations during signed language processing in deaf signers and those during spoken language processing in hearing speakers. For sign language users, it thus remains unclear how visuospatial inputs are converted into the verbal STM operating in the left-hemisphere language network. Using functional magnetic resonance imaging, the present study investigated neural activation while bilinguals of spoken and signed language were engaged in a sequence memory span task. On each trial, participants viewed a nonsense syllable sequence presented either as written letters or as fingerspelling (4-7 syllables in length) and then held the syllable sequence for 12 s. Behavioral analysis revealed that participants relied on phonological memory while holding verbal information regardless of the type of input modality. At the neural level, this maintenance stage broadly activated the left-hemisphere language network, including the inferior frontal gyrus, supplementary motor area, superior temporal gyrus and inferior parietal lobule, for both letter and fingerspelling conditions. Interestingly, while most participants reported that they relied on phonological memory during maintenance, direct comparisons between letter and fingerspelling inputs revealed strikingly different patterns of neural activation during the same period. Namely, the effortful maintenance of fingerspelling inputs relative to letter inputs activated the left superior parietal lobule and dorsal premotor area, i.e., brain regions known to play a role in visuomotor analysis of hand/arm movements. 
These findings suggest that the dorsal visuomotor neural system subserves verbal learning via sign language by relaying gestural inputs to the classical left-hemisphere language network.

  20. Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension

    PubMed Central

    Sekerina, Irina A.; Campanelli, Luca; Van Dyke, Julie A.

    2016-01-01

    The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration, however this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility to study retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors, which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects, when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals. PMID:27378974

  1. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    PubMed

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters, in each of which standardized patients evaluate spoken English proficiency and interpersonal skills. Generalizability theory was used to estimate the consistency of the spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared across various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and to scores on the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
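The reliability estimate described above comes from generalizability theory: for a simple person-by-encounter design, the dependability of a candidate's mean rating over n encounters is var_person / (var_person + var_residual / n). The sketch below illustrates that formula only; the variance components are invented for illustration, not taken from the study.

```python
# Illustrative sketch of a generalizability (G) coefficient for the mean of
# n independent ratings in a person-by-encounter design. The variance
# components below are hypothetical, not the study's actual estimates.

def g_coefficient(var_person, var_residual, n_encounters):
    """Reliability of a candidate's mean rating over n_encounters ratings."""
    return var_person / (var_person + var_residual / n_encounters)

# Hypothetical variance components
var_person = 0.40    # true candidate-to-candidate variance
var_residual = 0.35  # encounter/SP/error variance

# Averaging over 10 encounters shrinks the error term tenfold,
# which is why reliability based on 10 evaluations can be high.
print(round(g_coefficient(var_person, var_residual, 10), 3))
```

The same function shows why a single encounter would be far less dependable: with n_encounters = 1 the error variance enters at full weight.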

  2. How and When Accentuation Influences Temporally Selective Attention and Subsequent Semantic Processing during On-Line Spoken Language Comprehension: An ERP Study

    ERIC Educational Resources Information Center

    Li, Xiao-qing; Ren, Gui-qin

    2012-01-01

    An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…

  3. Speech, Sign, or Multilingualism for Children with Hearing Loss: Quantitative Insights into Caregivers' Decision Making

    ERIC Educational Resources Information Center

    Crowe, Kathryn; McLeod, Sharynne; McKinnon, David H.; Ching, Teresa Y. C.

    2014-01-01

    Purpose: The authors sought to investigate the influence of a comprehensive range of factors on the decision making of caregivers of children with hearing loss regarding the use of speech, the use of sign, spoken language multilingualism, and spoken language choice. This is a companion article to the qualitative investigation described in Crowe,…

  4. Why Oracy Must Be in the Curriculum (and Group Work in the Classroom)

    ERIC Educational Resources Information Center

    Mercer, Neil

    2015-01-01

    In this article it is argued that the development of young people's skills in using spoken language should be given more time and attention in the school curriculum. The author discusses the importance of the effective use of spoken language in educational and work settings, considers what research has told us about the factors that make group…

  5. Developing and Testing EVALOE: A Tool for Assessing Spoken Language Teaching and Learning in the Classroom

    ERIC Educational Resources Information Center

    Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José

    2015-01-01

    Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…

  6. Accelerating Receptive Language Acquisition in Kindergarten Students: An Action Research Study

    ERIC Educational Resources Information Center

    Hewitt, Christine L.

    2013-01-01

    Receptive language skills allow students to understand the meaning of words spoken to them. When students are unable to comprehend the majority of the words that are spoken to them, they do not have the ability to act on those words, follow given directions, build on prior knowledge, or construct adequate meaning. The inability to understand the…

  7. The Sounds of Spanish: Analysis and Application (with Special Reference to American English).

    ERIC Educational Resources Information Center

    Hammond, Robert M.

    This book is intended to be an introduction to the sound system of the Spanish language. The book is descriptive in nature and presents a true picture of the language as it is spoken by native speakers from a wide variety of dialect zones. The book is divided into five parts and 25 chapters. Part one, Phonetics and Phonology," includes the…

  8. Analysis of the Socio-Demographic Structure and Thoughts of the Turkish Bilingual Children on "Bilingualism" in Germany

    ERIC Educational Resources Information Center

    Kabadayi, Abdulkadir

    2008-01-01

    Two thirds of the world's population is bilingual to some extent, hence from an international perspective speaking more than one language is the norm rather than the exception. Germany, Bulgaria, Iraq are only a few of the countries where more than one language is widely spoken. This study aims to analyse the socio-demographic structure and second…

  9. Novel Spoken Word Learning in Adults with Developmental Dyslexia

    ERIC Educational Resources Information Center

    Conner, Peggy S.

    2013-01-01

    A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…

  10. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network.

    PubMed

    Wolf, Dhana; Rekittke, Linn-Marlen; Mittelberg, Irene; Klasen, Martin; Mathiak, Klaus

    2017-01-01

    Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language), which rely on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that the subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without an explicit stimulus model (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in the contributing neural structures, and we therefore studied how ISC in language networks changed with task demands. Indeed, the conventionality task significantly increased covariance of the button-press time series and neuronal synchronization in the left IFG relative to the other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions, with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.

  11. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network

    PubMed Central

    Wolf, Dhana; Rekittke, Linn-Marlen; Mittelberg, Irene; Klasen, Martin; Mathiak, Klaus

    2017-01-01

    Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language), which rely on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that the subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without an explicit stimulus model (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in the contributing neural structures, and we therefore studied how ISC in language networks changed with task demands. Indeed, the conventionality task significantly increased covariance of the button-press time series and neuronal synchronization in the left IFG relative to the other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions, with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
PMID:29249945

  12. A Look into the Crystal Ball for Children Who Are Deaf or Hard of Hearing: Needs, Opportunities, and Challenges.

    PubMed

    Yoshinaga-Itano, Christine; Wiggin, Mallene

    2016-11-01

    Hearing is essential for the development of speech, spoken language, and listening skills. Previously, children with hearing loss went undiagnosed until they were 2.5 or 3 years of age, and the auditory deprivation during this critical period of development significantly impacted long-term listening and spoken language outcomes. With the advent of universal newborn hearing screening, the average age of diagnosis has dropped to the first few months of life, which sets the stage for outcomes in which children test in the normal range for speech, spoken language, and auditory skills. However, our work is not finished. The future holds even greater possibilities for children with hearing loss. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  13. Win-win: advancing written language knowledge and practice through university clinics.

    PubMed

    Katz, Lauren A; Fallon, Karen A

    2015-02-01

    Speech-language pathologists (SLPs) are uniquely suited to assess and treat individuals with both spoken and written language disorders. Yet as students move from the elementary grades into the middle and high school grades, SLPs tend to provide fewer direct language services to them. Although spoken language disorders typically become written language disorders, SLPs are not receiving sufficient training in the area of written language, and this is reflected in the extent to which they believe they have the knowledge and skills to serve the struggling readers and writers on their caseloads. In this article, we discuss these problems and present effective methods for addressing them.

  14. Sundanese Complementation

    ERIC Educational Resources Information Center

    Kurniawan, Eri

    2013-01-01

    The focus of this thesis is the description and analysis of clausal complementation in Sundanese, an Austronesian language spoken in Indonesia. The thesis examined a range of clausal complement types in Sundanese, which consists of (i) "yen/(wi)rehna" "that" complements, (ii) "pikeun" "for" complements,…

  15. Iconicity in English and Spanish and Its Relation to Lexical Category and Age of Acquisition

    PubMed Central

    Lupyan, Gary

    2015-01-01

    Signed languages exhibit iconicity (resemblance between form and meaning) across their vocabulary, and many non-Indo-European spoken languages feature sizable classes of iconic words known as ideophones. In comparison, Indo-European languages like English and Spanish are believed to be arbitrary outside of a small number of onomatopoeic words. In three experiments with English and two with Spanish, we asked native speakers to rate the iconicity of ~600 words from the English and Spanish MacArthur-Bates Communicative Developmental Inventories. We found that iconicity in the words of both languages varied in a theoretically meaningful way with lexical category. In both languages, adjectives were rated as more iconic than nouns and function words, and corresponding to typological differences between English and Spanish in verb semantics, English verbs were rated as relatively iconic compared to Spanish verbs. We also found that both languages exhibited a negative relationship between iconicity ratings and age of acquisition. Words learned earlier tended to be more iconic, suggesting that iconicity in early vocabulary may aid word learning. Altogether these findings show that iconicity is a graded quality that pervades vocabularies of even the most “arbitrary” spoken languages. The findings provide compelling evidence that iconicity is an important property of all languages, signed and spoken, including Indo-European languages. PMID:26340349

  16. Project ASPIRE: Spoken Language Intervention Curriculum for Parents of Low-socioeconomic Status and Their Deaf and Hard-of-Hearing Children.

    PubMed

    Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen

    2016-02-01

    Objective: To investigate the impact of a spoken language intervention curriculum aiming to improve the language environments that caregivers of low socioeconomic status (SES) provide for their deaf and hard-of-hearing (D/HH) children with cochlear implants and hearing aids, in order to support the children's spoken language development. Study Design: Quasiexperimental. Setting: Tertiary. Participants: Thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies Medicaid or WIC/LINK), with children aged < 4.5 years, hearing loss of ≥ 30 dB between 500 and 4000 Hz, using at least one amplification device with adequate amplification (hearing aid, cochlear implant, or osseo-integrated device). Intervention: Behavioral; a caregiver-directed educational intervention curriculum designed to improve D/HH children's early language environments. Main Outcome Measures: Changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], and Conversational Turn Count [CTC]). Results: Significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group; no significant changes in LENA outcomes. Conclusions: Results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.

  17. [Hemispheric asymmetry modulation for language processing in aging: meta-analysis of studies using the dichotic listening test].

    PubMed

    Vanhoucke, Elodie; Cousin, Emilie; Baciu, Monica

    2013-03-01

    Growing evidence suggests that age affects the interhemispheric representation of language. The dichotic listening test assesses lateralization for spoken language and generally reveals a right-ear/left-hemisphere (LH) advantage in young adults. According to reported results, older adults display increasing LH predominance in some studies but stable LH language lateralization in others. The aim of this study was to characterize the main pattern of results with respect to the effect of normal aging on hemispheric specialization for language as measured by the dichotic listening test. A meta-analysis of 11 studies was performed. Interhemispheric asymmetry does not appear to increase with age. A supplementary qualitative analysis suggests that the right-ear advantage increases between 40 and 49 years of age and becomes stable or decreases after 55 years of age, suggesting a decline of the right-ear/LH advantage.

  18. Positive Emotional Language in the Final Words Spoken Directly Before Execution

    PubMed Central

    Hirschmüller, Sarah; Egloff, Boris

    2016-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided evidence that increased use of positive emotion words serves as a way to protect and defend against the mortality salience of one’s own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. Using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use indicative of self-references, social orientation, and a present-oriented time focus, as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality. PMID:26793135
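The computerized quantitative text analysis described above boils down to dictionary-based word counting: the proportion of tokens matching a positive versus a negative emotion-word list. The sketch below is a toy illustration of that idea; the word lists and sample statement are invented, not the study's actual (LIWC-style) dictionaries or data.

```python
# Toy dictionary-based emotion word counting (hypothetical word lists;
# not the study's actual dictionaries or final-statement corpus).

POSITIVE = {"love", "peace", "thank", "hope", "happy", "grateful"}
NEGATIVE = {"hate", "fear", "pain", "sorry", "guilt", "sad"}

def emotion_proportions(text):
    """Return (positive, negative) emotion-word proportions of a text."""
    words = [w.strip(".,!?'\"").lower() for w in text.split()]
    n = len(words)
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / n, neg / n

pos_rate, neg_rate = emotion_proportions(
    "I love you all and I hope you find peace. Thank you, warden."
)
print(pos_rate, neg_rate)
```

A corpus-level analysis would compute these proportions per statement and compare them against base rates from reference corpora, as the study did.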

  19. Grammar of Kove: An Austronesian Language of the West New Britain Province, Papua New Guinea

    ERIC Educational Resources Information Center

    Sato, Hiroko

    2013-01-01

    This dissertation is a descriptive grammar of Kove, an Austronesian language spoken in the West New Britain Province of Papua New Guinea. Kove is primarily spoken in 18 villages, including some on the small islands north of New Britain. There are about 9,000 people living in the area, but many are not fluent speakers of Kove. The dissertation…

  20. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  1. The Role of Situational Authenticity in English Language Textbooks

    ERIC Educational Resources Information Center

    Chan, Jim Yee Him

    2013-01-01

    This study assesses the extent to which situational authenticity has been implemented in three commercial ELT textbook series in Hong Kong, which are designed to reflect the local sociolinguistic setting. The analysis involved the quantification and categorization of both written and spoken texts in the textbooks. The results of this analysis were…

  2. Sasak Voice

    ERIC Educational Resources Information Center

    Asikin-Garmager, Eli Scott

    2017-01-01

    This dissertation provides a formal and functional analysis of grammatical voice in Sasak, an Austronesian language spoken in Eastern Indonesia. The research addresses two primary questions, which are (1) how does Sasak clause structure and morphosyntax vary across dialects? and (2) what shapes speakers' syntactic production, namely grammatical…

  3. Writing Signed Languages: What For? What Form?

    PubMed

    Grushkin, Donald A

    2017-01-01

    Signed languages around the world have tended to maintain an "oral," unwritten status. Despite the advantages of possessing a written form of their language, signed language communities typically resist and reject attempts to create such written forms. The present article addresses many of the arguments against written forms of signed languages and presents the potential advantages of writing signed languages. Following a history of the development of writing in spoken as well as signed language populations, the effects of orthographic types upon literacy and biliteracy are explored. Attempts at writing signed languages have followed two primary paths: "alphabetic" and "iconographic." It is argued that, for greatest congruency and ease in developing biliteracy strategies in societies where an alphabetic script is used for the spoken language, signed language communities within these societies are best served by adopting an alphabetic script for writing their signed language.

  4. The road to language learning is iconic: evidence from British Sign Language.

    PubMed

    Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella

    2012-12-01

    An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.

  5. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    PubMed Central

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: labial consonants co-occur with central vowels more often than chance would dictate, coronals with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French, and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, but not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when all nine combinations of consonants and vowels were considered. The results indicate that spoken (token) frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980
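The "more frequent than would be expected" comparison above is an observed-versus-expected count under independence: the expected token count of a CV combination is (count of C × count of V) / total tokens, and a ratio above 1 marks an over-represented combination. The sketch below illustrates this with invented token counts, not the corpora analyzed in the study.

```python
# Observed/expected ratio for CV combinations under an independence model.
# The token counts are hypothetical, purely to illustrate the computation.

from collections import Counter

# Hypothetical CV tokens: "b" = labial, "d" = coronal; "a" = central/back
# vowel, "i" = front vowel.
tokens = ["ba"] * 50 + ["bi"] * 10 + ["di"] * 45 + ["da"] * 15

cv_counts = Counter(tokens)
c_counts = Counter(t[0] for t in tokens)
v_counts = Counter(t[1] for t in tokens)
total = len(tokens)

def obs_exp_ratio(c, v):
    """Observed count of c+v divided by its expected count under independence."""
    expected = c_counts[c] * v_counts[v] / total
    return cv_counts[c + v] / expected

print(round(obs_exp_ratio("d", "i"), 2))  # coronal + front vowel
```

In the study's terms, running this over dictionary entries gives "type" ratios, while running it over corpus tokens gives the "token" ratios that the babbling measures should be compared against.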

  6. Verbal redundancy aids memory for filmed entertainment dialogue.

    PubMed

    Hinkin, Michael P; Harris, Richard J; Miranda, Andrew T

    2014-01-01

    Three studies investigated the effects of presentation modality and redundancy of verbal content on recognition memory for entertainment film dialogue. U.S. participants watched two brief movie clips and afterward answered multiple-choice questions about information from the dialogue. Experiment 1 compared recognition memory for spoken dialogue in the native language (English) with subtitles in English, French, or no subtitles. Experiment 2 compared memory for material in English subtitles with spoken dialogue in English, French, or no sound. Experiment 3 examined three control conditions with no spoken or captioned material in the native language. All participants watched the same video clips and answered the same questions. Performance was consistently good whenever English dialogue appeared in either the subtitles or sound, and best of all when it appeared in both, supporting the facilitation of verbal redundancy. Performance was also better when English was only in the subtitles than when it was only spoken. Unexpectedly, sound or subtitles in an unfamiliar language (French) modestly improved performance, as long as there was also a familiar channel. Results extend multimedia research on verbal redundancy for expository material to verbal information in entertainment media.

  7. Lesion localization of speech comprehension deficits in chronic aphasia

    PubMed Central

    Binder, Jeffrey R.; Humphries, Colin; Gross, William L.; Book, Diane S.

    2017-01-01

    Objective: Voxel-based lesion-symptom mapping (VLSM) was used to localize impairments specific to multiword (phrase and sentence) spoken language comprehension. Methods: Participants were 51 right-handed patients with chronic left hemisphere stroke. They performed an auditory description naming (ADN) task requiring comprehension of a verbal description, an auditory sentence comprehension (ASC) task, and a picture naming (PN) task. Lesions were mapped using high-resolution MRI. VLSM analyses identified the lesion correlates of ADN and ASC impairment, first with no control measures, then adding PN impairment as a covariate to control for cognitive and language processes not specific to spoken language. Results: ADN and ASC deficits were associated with lesions in a distributed frontal-temporal parietal language network. When PN impairment was included as a covariate, both ADN and ASC deficits were specifically correlated with damage localized to the mid-to-posterior portion of the middle temporal gyrus (MTG). Conclusions: Damage to the mid-to-posterior MTG is associated with an inability to integrate multiword utterances during comprehension of spoken language. Impairment of this integration process likely underlies the speech comprehension deficits characteristic of Wernicke aphasia. PMID:28179469
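The voxel-wise logic behind VLSM can be stated simply: at each voxel, patients with a lesion there are compared with patients without one on a behavioral score, and voxels where the lesioned group performs reliably worse are flagged. The sketch below is a toy single-voxel illustration of that comparison with invented scores; it omits the thresholding, covariates, and multiple-comparison corrections a real VLSM analysis requires.

```python
# Toy illustration of the per-voxel comparison in voxel-based
# lesion-symptom mapping (VLSM). All data are hypothetical.

import statistics as st

def vlsm_t(scores, lesion_mask):
    """Welch's t statistic: spared-group minus lesioned-group performance."""
    lesioned = [s for s, m in zip(scores, lesion_mask) if m]
    spared = [s for s, m in zip(scores, lesion_mask) if not m]
    num = st.mean(spared) - st.mean(lesioned)
    den = (st.variance(spared) / len(spared)
           + st.variance(lesioned) / len(lesioned)) ** 0.5
    return num / den

# Hypothetical comprehension scores and a lesion indicator (1 = lesion
# present at this voxel) for six patients.
scores = [90, 85, 88, 60, 55, 58]
mask = [0, 0, 0, 1, 1, 1]

# A large positive t at a voxel suggests damage there impairs the behavior.
print(round(vlsm_t(scores, mask), 2))
```

A full analysis repeats this at every voxel of the lesion maps and, as in the study, can add a covariate task (here, picture naming) to isolate deficits specific to spoken language comprehension.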

  8. Lesion localization of speech comprehension deficits in chronic aphasia.

    PubMed

    Pillay, Sara B; Binder, Jeffrey R; Humphries, Colin; Gross, William L; Book, Diane S

    2017-03-07

    Voxel-based lesion-symptom mapping (VLSM) was used to localize impairments specific to multiword (phrase and sentence) spoken language comprehension. Participants were 51 right-handed patients with chronic left hemisphere stroke. They performed an auditory description naming (ADN) task requiring comprehension of a verbal description, an auditory sentence comprehension (ASC) task, and a picture naming (PN) task. Lesions were mapped using high-resolution MRI. VLSM analyses identified the lesion correlates of ADN and ASC impairment, first with no control measures, then adding PN impairment as a covariate to control for cognitive and language processes not specific to spoken language. ADN and ASC deficits were associated with lesions in a distributed frontal-temporal parietal language network. When PN impairment was included as a covariate, both ADN and ASC deficits were specifically correlated with damage localized to the mid-to-posterior portion of the middle temporal gyrus (MTG). Damage to the mid-to-posterior MTG is associated with an inability to integrate multiword utterances during comprehension of spoken language. Impairment of this integration process likely underlies the speech comprehension deficits characteristic of Wernicke aphasia. © 2017 American Academy of Neurology.

  9. The sounds of sarcasm in English and Cantonese: A cross-linguistic production and perception study

    NASA Astrophysics Data System (ADS)

    Cheang, Henry S.

    Three studies were conducted to examine the acoustic markers of sarcasm in English and in Cantonese, and the manner in which such markers are perceived across these languages. The first study consisted of acoustic analyses of sarcastic utterances spoken in English to verify whether particular prosodic cues correspond to English sarcastic speech. Native English speakers produced utterances expressing sarcasm, sincerity, humour, or neutrality. Measures taken from each utterance included fundamental frequency (F0), amplitude, speech rate, harmonics-to-noise ratio (HNR, to probe voice quality), and one-third octave spectral values (to probe resonance). The second study was conducted to ascertain whether specific acoustic features marked sarcasm in Cantonese and how such features compare with English sarcastic prosody. The elicitation and acoustic analysis methods from the first study were applied to similarly-constructed Cantonese utterances spoken by native Cantonese speakers. Direct acoustic comparisons between Cantonese and English sarcasm exemplars were also made. To further test for cross-linguistic prosodic cues of sarcasm and to assess whether sarcasm could be conveyed across languages, a cross-linguistic perceptual study was then performed. A subset of utterances from the first two studies was presented to naive listeners fluent in either Cantonese or English. Listeners had to identify the attitude in each utterance regardless of language of presentation. Sarcastic utterances in English (regardless of text) were marked by lower mean F0 and reductions in HNR and F0 standard deviation (relative to comparison attitudes). Resonance changes and reductions in both speech rate and F0 range signalled sarcasm in conjunction with some vocabulary terms. By contrast, higher mean F0, amplitude range reductions, and F0 range restrictions corresponded with sarcastic utterances spoken in Cantonese regardless of text. For Cantonese, reduced speech rate and higher HNR interacted with certain vocabulary to mark sarcasm. Sarcastic prosody was most distinguished from acoustic features corresponding to sincere utterances in both languages. Direct English-Cantonese comparisons between sarcasm tokens confirmed cross-linguistic differences in sarcastic prosody. Finally, Cantonese and English listeners could identify sarcasm in their native languages but identified sarcastic utterances spoken in the unfamiliar language at chance levels. It was concluded that particular acoustic cues marked sarcastic speech in Cantonese and English, and these patterns of sarcastic prosody were specific to each language.

  10. The Bilingual Language Interaction Network for Comprehension of Speech

    PubMed Central

    Marian, Viorica

    2013-01-01

    During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension. PMID:24363602
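
The abstract's key architectural choice, building each level of processing from a dynamic, self-organizing map (SOM), can be illustrated with a minimal sketch. This is not the BLINCS implementation; the function name, parameters, and one-dimensional map layout are invented for illustration of the general SOM update rule (pull every unit toward the input, weighted by a Gaussian neighborhood around the best-matching unit).

```python
import math

def som_update(weights, x, lr=0.5, sigma=1.0):
    """One update step of a 1-D self-organizing map (SOM).

    Find the best-matching unit (BMU) for input vector x, then move
    every unit toward x, weighted by a Gaussian neighborhood centered
    on the BMU. Returns the updated weights and the BMU index.
    """
    # Best-matching unit: the unit whose weight vector is closest to x.
    bmu = min(range(len(weights)), key=lambda i: math.dist(weights[i], x))
    new_weights = []
    for i, w in enumerate(weights):
        # Neighborhood influence decays with grid distance from the BMU.
        influence = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
        new_weights.append([wj + lr * influence * (xj - wj)
                            for wj, xj in zip(w, x)])
    return new_weights, bmu
```

In a full model such as BLINCS, maps like this would be trained on phonological and semantic feature vectors and interconnected across levels of processing; the sketch shows only the single-map learning step.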

  11. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    PubMed

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-06-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  12. A randomized trial comparison of the effects of verbal and pictorial naturalistic communication strategies on spoken language for young children with autism.

    PubMed

    Schreibman, Laura; Stahmer, Aubyn C

    2014-05-01

    Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally based intervention, Pivotal Response Training (PRT), to a pictorially based behavioral intervention, the Picture Exchange Communication System (PECS), on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.

  13. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    PubMed

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  14. Understanding the Relationship between Latino Students' Preferred Learning Styles and Their Language Spoken at Home

    ERIC Educational Resources Information Center

    Maldonado Torres, Sonia Enid

    2016-01-01

    The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…

  15. A Multilingual Approach to Analysing Standardized Test Results: Immigrant Primary School Children and the Role of Languages Spoken in a Bi-/Multilingual Community

    ERIC Educational Resources Information Center

    De Angelis, Gessica

    2014-01-01

    The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…

  16. Limited English proficiency, primary language at home, and disparities in children's health care: how language barriers are measured matters.

    PubMed

    Flores, Glenn; Abreu, Milagros; Tomany-Korman, Sandra C

    2005-01-01

    Approximately 3.5 million U.S. schoolchildren are limited in English proficiency (LEP). Disparities in children's health and health care are associated with both LEP and speaking a language other than English at home, but prior research has not examined which of these two measures of language barriers is most useful in examining health care disparities. Our objectives were to compare primary language spoken at home vs. parental LEP and their associations with health status, access to care, and use of health services in children. We surveyed parents at urban community sites in Boston, asking 74 questions on children's health status, access to health care, and use of health services. Some 98% of the 1,100 participating children and families were of non-white race/ethnicity, 72% of parents were LEP, and 13 different primary languages were spoken at home. "Dose-response" relationships were observed between parental English proficiency and several child and parental sociodemographic features, including children's insurance coverage, parental educational attainment, citizenship and employment, and family income. Similar "dose-response" relationships were noted between the primary language spoken at home and many but not all of the same sociodemographic features. In multivariate analyses, having LEP parents was associated with triple the odds of a child having fair/poor health status, double the odds of the child spending at least one day in bed for illness in the past year, and significantly greater odds of children not being brought in for needed medical care for six of nine access barriers to care. None of these findings were observed in analyses of the primary language spoken at home. Individual parental LEP categories were associated with different risks of adverse health status and outcomes. Parental LEP is superior to the primary language spoken at home as a measure of the impact of language barriers on children's health and health care. Individual parental LEP categories are associated with different risks of adverse outcomes in children's health and health care. Consistent data collection on parental English proficiency and referral of LEP parents to English classes by pediatric providers have the potential to contribute toward reduction and elimination of health care disparities for children of LEP parents.

  17. Attentional Capture of Objects Referred to by Spoken Language

    ERIC Educational Resources Information Center

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  18. How Do Communication Modes of Deaf and Hard-of-Hearing Prereaders Influence Teachers' Read-Aloud Goals?

    ERIC Educational Resources Information Center

    Schwarz, Amy Louise; Guajardo, Jennifer; Hart, Rebecca

    2017-01-01

    Deaf and hard-of-hearing (DHH) literature suggests that there are different read-aloud goals for DHH prereaders based on the spoken and visual communication modes DHH prereaders use, such as: American Sign Language (ASL), simultaneously signed and spoken English (SimCom), and predominately spoken English only. To date, no studies have surveyed…

  19. Iconicity as a General Property of Language: Evidence from Spoken and Signed Languages

    PubMed Central

    Perniss, Pamela; Thompson, Robin L.; Vigliocco, Gabriella

    2010-01-01

    Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings found in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to “hook up” to motor, perceptual, and affective experience. PMID:21833282

  20. Advances in natural language processing.

    PubMed

    Hirschberg, Julia; Manning, Christopher D

    2015-07-17

    Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.

  1. The relation between working memory and language comprehension in signers and speakers.

    PubMed

    Emmorey, Karen; Giezen, Marcel R; Petrich, Jennifer A F; Spurgeon, Erin; O'Grady Farnady, Lucinda

    2017-06-01

    This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL-English bilinguals completed several spatial and linguistic serial recall tasks. Additionally, their comprehension of spatial and non-spatial information in ASL and spoken English narratives was assessed. Results from the linguistic serial recall tasks revealed that the often reported advantage for speakers on linguistic short-term memory tasks does not extend to complex WM tasks with a serial recall component. For English, linguistic WM predicted retention of non-spatial information, and both linguistic and spatial WM predicted retention of spatial information. For ASL, spatial WM predicted retention of spatial (but not non-spatial) information, and linguistic WM did not predict retention of either spatial or non-spatial information. Overall, our findings argue against strong assumptions of independent domain-specific subsystems for the storage and processing of linguistic and spatial information and furthermore suggest a less important role for serial encoding in signed than spoken language comprehension. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Language and Culture in the Multi-Ethnic Community: Spoken-Language Assessment

    ERIC Educational Resources Information Center

    Matluck, Joseph H.; Mace-Matluck, Betty J.

    1975-01-01

    Describes the research approach used to develop the MAT-SEA-CAL Oral Proficiency tests designed by the authors. Language test performance depends on both language proficiency and knowledge of the culture. (TL)

  3. Development of the lexical-semantic language system: N400 priming effect for spoken words in 18- and 24-month-old children.

    PubMed

    Rämä, Pia; Sirri, Louah; Serres, Josette

    2013-04-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that the N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds the effect was observed similarly to 24-month-olds only in those children with higher word production ability. The results suggest that words are categorically organized in the mental lexicon of children at the age of 2 years and even earlier in children with a high vocabulary. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Experiments on Urdu Text Recognition

    NASA Astrophysics Data System (ADS)

    Mukhtar, Omar; Setlur, Srirangaraj; Govindaraju, Venu

    Urdu is a language spoken in the Indian subcontinent by an estimated 130-270 million speakers. At the spoken level, Urdu and Hindi are considered dialects of a single language because of shared vocabulary and the similarity in grammar. At the written level, however, Urdu is much closer to Arabic because it is written in Nastaliq, the calligraphic style of the Persian-Arabic script. Therefore, a speaker of Hindi can understand spoken Urdu but may not be able to read written Urdu because Hindi is written in Devanagari script, whereas an Arabic writer can read the written words but may not understand the spoken Urdu. In this chapter we present an overview of written Urdu. Prior research in handwritten Urdu OCR is very limited. We present (perhaps) the first system for recognizing handwritten Urdu words. On a data set of about 1300 handwritten words, we achieved an accuracy of 70% for the top choice, and 82% for the top three choices.
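
The reported figures (70% for the top choice, 82% for the top three choices) are instances of the standard top-k accuracy metric, which can be computed as in the sketch below. The function name is ours, and the word strings in the test are placeholders, not items from the authors' data set.

```python
def top_k_accuracy(ranked_candidates, truths, k):
    """Fraction of samples whose true label appears among the
    recognizer's top-k ranked candidate words.

    ranked_candidates: per-sample lists of candidates, best first.
    truths: the true label for each sample.
    """
    hits = sum(1 for cands, truth in zip(ranked_candidates, truths)
               if truth in cands[:k])
    return hits / len(truths)
```

With k = 1 this gives the top-choice accuracy; with k = 3, the top-three accuracy quoted in the abstract.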

  5. Talker familiarity and spoken word recognition in school-age children

    PubMed Central

    Levi, Susannah V.

    2014-01-01

    Research with adults has shown that spoken language processing is improved when listeners are familiar with talkers’ voices, known as the familiar talker advantage. The current study explored whether this ability extends to school-age children, who are still acquiring language. Children were familiarized with the voices of three German–English bilingual talkers and were tested on the speech of six bilinguals, three of whom were familiar. Results revealed that children do show improved spoken language processing when they are familiar with the talkers, but this improvement was limited to highly familiar lexical items. This restriction of the familiar talker advantage is attributed to differences in the representation of highly familiar and less familiar lexical items. In addition, children did not exhibit accent-general learning; despite having been exposed to German-accented talkers during training, there was no improvement for novel German-accented talkers. PMID:25159173

  6. Misunderstanding and Repair in Tactile Auslan

    ERIC Educational Resources Information Center

    Willoughby, Louisa; Manns, Howard; Iwasaki, Shimako; Bartlett, Meredith

    2014-01-01

    This article discusses ways in which misunderstandings arise in Tactile Australian Sign Language (Tactile Auslan) and how they are resolved. Of particular interest are the similarities to and differences from the same processes in visually signed and spoken conversation. This article draws on detailed conversation analysis (CA) and demonstrates…

  7. Language planning for the 21st century: revisiting bilingual language policy for deaf children.

    PubMed

    Knoors, Harry; Marschark, Marc

    2012-01-01

    For over 25 years in some countries and more recently in others, bilingual education involving sign language and the written/spoken vernacular has been considered an essential educational intervention for deaf children. With the recent growth in universal newborn hearing screening and technological advances such as digital hearing aids and cochlear implants, however, more deaf children than ever before have the potential for acquiring spoken language. As a result, the question arises as to the role of sign language and bilingual education for deaf children, particularly those who are very young. On the basis of recent research and fully recognizing the historical sensitivity of this issue, we suggest that language planning and language policy should be revisited in an effort to ensure that they are appropriate for the increasingly diverse population of deaf children.

  8. How long-term memory and accentuation interact during spoken language comprehension.

    PubMed

    Li, Xiaoqing; Yang, Yufang

    2013-04-01

    Spoken language comprehension requires immediate integration of different information types, such as semantics, syntax, and prosody. Meanwhile, both the information derived from speech signals and the information retrieved from long-term memory exert their influence on language comprehension immediately. Using EEG (electroencephalogram), the present study investigated how the information retrieved from long-term memory interacts with accentuation during spoken language comprehension. Mini Chinese discourses were used as stimuli, with an interrogative or assertive context sentence preceding the target sentence. The target sentence included one critical word conveying new information. The critical word was either highly expected or lowly expected given the information retrieved from long-term memory. Moreover, the critical word was either consistently accented or inconsistently de-accented. The results revealed that for lowly expected new information, inconsistently de-accented words elicited a larger N400 and larger theta power increases (4-6 Hz) than consistently accented words. In contrast, for the highly expected new information, consistently accented words elicited a larger N400 and larger alpha power decreases (8-14 Hz) than inconsistently de-accented words. The results suggest that, during spoken language comprehension, the effect of accentuation interacted with the information retrieved from long-term memory immediately. Moreover, our results also have important consequences for our understanding of the processing nature of the N400. The N400 amplitude is not only enhanced for incorrect information (new and de-accented words) but also enhanced for correct information (new and accented words). Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Towards a Sign Language Synthesizer: a Bridge to Communication Gap of the Hearing/Speech Impaired Community

    NASA Astrophysics Data System (ADS)

    Maarif, H. A.; Akmeliawati, R.; Gunawan, T. S.; Shafie, A. A.

    2013-12-01

    A sign language synthesizer is a method to visualize sign language movements generated from spoken language. Sign language (SL) is one of the means used by hearing/speech-impaired (HSI) people to communicate with hearing people. Unfortunately, the number of people, including HSI people themselves, who are familiar with sign language is very limited, which causes difficulties in communication between hearing and HSI people. Sign language involves not only hand movement but also facial expression; the two elements are complementary. The hand movement conveys the meaning of each sign, and the facial expression conveys the signer's emotion. Generally, a sign language synthesizer recognizes the spoken language using speech recognition, performs grammatical processing with a context-free grammar, and renders the result in 3D using a recorded avatar. This paper analyzes and compares existing techniques for developing a sign language synthesizer, leading to the IIUM Sign Language Synthesizer.
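
The three-stage pipeline the abstract describes (speech recognition, grammatical processing, avatar rendering) can be sketched as a chain of functions. Everything below is a hypothetical skeleton for illustration only: the stage functions are stubs with invented names, not any real recognizer, parser, or renderer.

```python
def recognize_speech(audio):
    # Stage 1: speech recognition -> text. Stubbed: a real system
    # would decode the audio signal; here we just read a field.
    return audio["transcript"]

def parse_to_gloss(text):
    # Stage 2: grammatical processing. A real system would apply a
    # context-free grammar to map spoken-language word order to
    # sign-language gloss order; here we only tokenize and uppercase
    # (sign glosses are conventionally written in capitals).
    return text.upper().split()

def render_avatar(glosses):
    # Stage 3: look up a prerecorded avatar clip for each gloss.
    return [f"clip:{g}" for g in glosses]

def synthesize(audio):
    # The full pipeline: audio -> text -> glosses -> avatar clips.
    return render_avatar(parse_to_gloss(recognize_speech(audio)))
```

The design point the abstract raises, that signing combines hand movement (meaning) with facial expression (emotion), would add a parallel channel alongside the gloss stream in a real system.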

  10. Unit 802: Language Varies with Approach.

    ERIC Educational Resources Information Center

    Minnesota Univ., Minneapolis. Center for Curriculum Development in English.

    This eighth-grade language unit stresses developing the student's sensitivity to variations in language, primarily the similarities and differences between spoken and written language. Through sample lectures and discussion questions, the students are helped to form generalizations about language: that speech is the primary form of language; that…

  11. Auditory Frequency Discrimination in Children with Dyslexia

    ERIC Educational Resources Information Center

    Halliday, Lorna F.; Bishop, Dorothy V. M.

    2006-01-01

    A popular hypothesis holds that developmental dyslexia is caused by phonological processing problems and is therefore linked to difficulties in the analysis of spoken as well as written language. It has been suggested that these phonological deficits might be attributable to low-level problems in processing the temporal fine structure of auditory…

  12. Listener Reliability in Assigning Utterance Boundaries in Children's Spontaneous Speech

    ERIC Educational Resources Information Center

    Stockman, Ida J.

    2010-01-01

    Research and clinical practices often rely on an utterance unit for spoken language analysis. This paper calls attention to the problems encountered when identifying utterance boundaries in young children's spontaneous conversational speech. The results of a reliability study of utterance boundary assignment are described for 20 females with…

  13. Kanasi: A Brief Grammar Sketch.

    ERIC Educational Resources Information Center

    Pappenhagen, Ronald W.

    An outline of the grammar of Kanasi, a non-Austronesian language in the Indo-Pacific family of the Daga branch and spoken in Papua New Guinea, includes analysis of noun phrases (numerals and descriptive modifiers, genitive constructions, and adpositions); verbs (affixes; tense, aspect, and moods; and causation); predicate nominals; existential,…

  14. Authenticity and TV Shows: A Multidimensional Analysis Perspective

    ERIC Educational Resources Information Center

    Al-Surmi, Mansoor

    2012-01-01

    Television shows, especially soap operas and sitcoms, are usually considered by English as a second language practitioners as a source of authentic spoken conversational materials presumably because they reflect the linguistic features of natural conversation. However, practitioners are faced with the dilemma of how to assess whether such…

  15. Who's on First? Investigating the referential hierarchy in simple native ASL narratives.

    PubMed

    Frederiksen, Anne Therese; Mayberry, Rachel I

    2016-09-01

    Discussions of reference tracking in spoken languages often invoke some version of a referential hierarchy. In this paper, we asked whether this hierarchy applies equally well to reference tracking in a visual language, American Sign Language, or whether modality differences influence its structure. Expanding the results of previous studies, this study looked at ASL referential devices beyond nouns, pronouns, and zero anaphora. We elicited four simple narratives from eight native ASL signers, and examined how the signers tracked reference throughout their stories. We found that ASL signers follow general principles of the referential hierarchy proposed for spoken languages by using nouns for referent introductions, and zero anaphora for referent maintenance. However, we also found significant differences such as the absence of pronouns in the narratives, despite their existence in ASL, and differential use of verbal and constructed action zero anaphora. Moreover, we found that native signers' use of classifiers varied with discourse status in a way that deviated from our expectations derived from the referential hierarchy for spoken languages. On this basis, we propose a tentative hierarchy of referential expressions for ASL that incorporates modality specific referential devices.

  16. Gesture production and comprehension in children with specific language impairment.

    PubMed

    Botting, Nicola; Riches, Nicholas; Gaynor, Marguerite; Morgan, Gary

    2010-03-01

    Children with specific language impairment (SLI) have difficulties with spoken language. However, some recent research suggests that these impairments reflect underlying cognitive limitations. Studying gesture may inform us clinically and theoretically about the nature of the association between language and cognition. A total of 20 children with SLI and 19 typically developing (TD) peers were assessed on a novel measure of gesture production. Children were also assessed for sentence comprehension errors in a speech-gesture integration task. Children with SLI performed equally to peers on gesture production but performed less well when comprehending integrated speech and gesture. Error patterns revealed a significant group interaction: children with SLI made more gesture-based errors, whilst TD children made semantically based ones. Children with SLI accessed and produced lexically encoded gestures despite having impaired spoken vocabulary and this group also showed stronger associations between gesture and language than TD children. When SLI comprehension breaks down, gesture may be relied on over speech, whilst TD children have a preference for spoken cues. The findings suggest that for children with SLI, gesture scaffolds are still more related to language development than for TD peers who have out-grown earlier reliance on gestures. Future clinical implications may include standardized assessment of symbolic gesture and classroom based gesture support for clinical groups.

  17. Brain metabolism of children with profound deafness: a visual language activation study by 18F-fluorodeoxyglucose positron emission tomography.

    PubMed

    Fujiwara, Keizo; Naito, Yasushi; Senda, Michio; Mori, Toshiko; Manabe, Tomoko; Shinohara, Shogo; Kikuchi, Masahiro; Hori, Shin-Ya; Tona, Yosuke; Yamazaki, Hiroshi

    2008-04-01

    The use of fluorodeoxyglucose positron emission tomography (FDG-PET) with a visual language task provided objective information on the development and plasticity of cortical language networks. This approach could help individuals involved in the habilitation and education of prelingually deafened children to decide upon the appropriate mode of communication. To investigate the cortical processing of the visual component of language and the effect of deafness upon this activity. Six prelingually deafened children participated in this study. The subjects were numbered 1-6 in the order of their spoken communication skills. In the time period between an intravenous injection of 370 MBq 18F-FDG and PET scanning of the brain, each subject was instructed to watch a video of the face of a speaking person. The cortical radioactivity of each deaf child was compared with that of a group of normal-hearing adults using a t test in a basic SPM2 model. The widest bilaterally activated cortical area was detected in subject 1, who was the worst user of spoken language. By contrast, there was no significant difference between subject 6, who was the best user of spoken language with a hearing aid, and the normal hearing group.

  18. Semantic fluency in deaf children who use spoken and signed language in comparison with hearing peers

    PubMed Central

    Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2017-01-01

    Abstract Background Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language delays. Aims We compared deaf and hearing children's performance on a semantic fluency task. Optimal performance on this task requires a systematic search of the mental lexicon, the retrieval of words within a subcategory and, when that subcategory is exhausted, switching to a new subcategory. We compared retrieval patterns between groups, and also compared the responses of deaf children who used British Sign Language (BSL) with those who used spoken English. We investigated how semantic fluency performance related to children's expressive vocabulary and executive function skills, and also retested semantic fluency in the majority of the children nearly 2 years later, in order to investigate how much progress they had made in that time. Methods & Procedures Participants were deaf children aged 6–11 years (N = 106, comprising 69 users of spoken English, 29 users of BSL and eight users of Sign Supported English—SSE) compared with hearing children (N = 120) of the same age who used spoken English. Semantic fluency was tested for the category ‘animals’. We coded for errors, clusters (e.g., ‘pets’, ‘farm animals’) and switches. Participants also completed the Expressive One‐Word Picture Vocabulary Test and a battery of six non‐verbal executive function tasks. In addition, we collected follow‐up semantic fluency data for 70 deaf and 74 hearing children, nearly 2 years after they were first tested. 
Outcomes & Results Deaf children, whether using spoken or signed language, produced fewer items in the semantic fluency task than hearing children, but they showed similar patterns of responses for items most commonly produced, clustering of items into subcategories and switching between subcategories. Both vocabulary and executive function scores predicted the number of correct items produced. Follow‐up data from deaf participants showed continuing delays relative to hearing children 2 years later. Conclusions & Implications We conclude that semantic fluency can be used experimentally to investigate lexical organization in deaf children, and that it potentially has clinical utility across the heterogeneous deaf population. We present normative data to aid clinicians who wish to use this task with deaf children. PMID:28691260
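
    The cluster-and-switch coding described in this record can be sketched in code. Below is a minimal, hypothetical scoring function; the `SUBCATEGORY` lexicon is a made-up illustration, not the study's actual coding scheme, which would cover the full 'animals' category and handle ambiguous items.

    ```python
    # Minimal sketch of cluster/switch scoring for one semantic fluency
    # response list. The SUBCATEGORY lexicon is illustrative only.

    SUBCATEGORY = {
        "dog": "pets", "cat": "pets", "hamster": "pets",
        "cow": "farm", "pig": "farm", "sheep": "farm",
        "lion": "wild", "tiger": "wild", "zebra": "wild",
    }

    def score_fluency(responses):
        """Return (number of correct items, cluster sizes, number of switches)."""
        seen, correct = set(), []
        for item in responses:
            item = item.lower()
            # drop errors (unknown items) and repetitions
            if item in SUBCATEGORY and item not in seen:
                seen.add(item)
                correct.append(item)
        clusters, switches, run = [], 0, 1
        for prev, cur in zip(correct, correct[1:]):
            if SUBCATEGORY[prev] == SUBCATEGORY[cur]:
                run += 1            # same subcategory: cluster continues
            else:
                clusters.append(run)
                switches += 1       # new subcategory: a switch
                run = 1
        if correct:
            clusters.append(run)
        return len(correct), clusters, switches
    ```

    For example, the response list "dog, cat, cow, pig, lion, dog" scores five correct items (the repeated "dog" is excluded), clusters of sizes 2, 2 and 1, and two switches.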

  19. Le langage des gestes (Body Language).

    ERIC Educational Resources Information Center

    Brunet, Jean-Paul

    1985-01-01

    Body language is inseparable from spoken language, and may reflect universal behavior or be culture-specific. Photographs and videotape recordings can help the French instructor illustrate the richness of facial and body mannerisms. (MSE)

  20. A Socio-Psycholinguistic Perspective on Biliteracy: The Use of Miscue Analysis as a Culturally Relevant Assessment Tool

    ERIC Educational Resources Information Center

    Kabuto, Bobbie

    2016-01-01

    Through the presentation of two bilingual reader profiles, this article will illustrate how miscue analysis can act as a culturally relevant assessment tool as it allows for the study of reading across different spoken and written languages. The research presented in this article integrates a socio-psycholinguistic perspective to reading and a…

  1. Same Talker, Different Language: A Replication.

    ERIC Educational Resources Information Center

    Stockmal, Verna; Bond, Z. S.

    This research investigated judgments of language samples produced by bilingual speakers. In the first study, listeners judged whether two language samples produced by bilingual speakers were spoken in the same language or in two different languages. Four bilingual African talkers recorded short passages in Swahili and in their home language (Akan,…

  2. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.

    PubMed

    Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung

    2016-01-01

    Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.

  3. Is There a Correlation between Languages Spoken and Intricate Movements of Tongue? A Comparative Study of Various Movements of Tongue among the Three Ethnic Races of Malaysia

    PubMed Central

    Nayak, Satheesha B; Awal, Mahfuzah Binti; Han, Chang Wei; Sivaram, Ganeshram; Vigneswaran, Thimesha; Choon, Tee Lian

    2016-01-01

    Introduction The tongue is mainly used for taste, chewing and speech. In the present study, we focused on the secondary function of the tongue: how it is used in phonetic pronunciation and linguistics, and how these factors affect tongue movements. Objective To compare all possible movements of the tongue among Malaysians belonging to three ethnic races and to find out if there is any link between the languages spoken and the ability to perform various tongue movements. Materials and Methods A total of 450 undergraduate medical students participated in the study. The students were chosen from three different races, i.e., Malays, Chinese and Indians (Malaysian Indians). Data was collected from the students through a semi-structured interview, following which each student was asked to demonstrate various tongue movements like protrusion, retraction, flattening, rolling, twisting, folding or any other special movements. The data obtained was first segregated and analysed according to gender, race and types and dialects of languages spoken. Results We found that most of the Malaysians were able to perform the basic movements of the tongue, like protrusion and flattening, and very few were able to perform twisting and folding of the tongue. The ability to perform normal tongue movements and special movements like folding, twisting, rolling and others was higher among Indians when compared to Malays and Chinese. Conclusion Languages spoken by Indians involve detailed tongue rolling and folding in pronouncing certain words, which may be why Indians are more versatile with tongue movements than the other two races amongst Malaysians. It is possible that the languages a person speaks serve as a variable that increases the ability to perform special tongue movements, in addition to the influence of the person's genetic makeup. PMID:26894051

  4. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    PubMed Central

    Li, Yu; Zhang, Linjun; Xia, Zhichao; Yang, Jie; Shu, Hua; Li, Ping

    2017-01-01

    Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability only in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading. PMID:28690507

  5. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults.

    PubMed

    Li, Yu; Zhang, Linjun; Xia, Zhichao; Yang, Jie; Shu, Hua; Li, Ping

    2017-01-01

    Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability only in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading.

  6. Analytic study of the Tadoma method: language abilities of three deaf-blind subjects.

    PubMed

    Chomsky, C

    1986-09-01

    This study reports on the linguistic abilities of 3 adult deaf-blind subjects. The subjects perceive spoken language through touch, placing a hand on the face of the speaker and monitoring the speaker's articulatory motions, a method of speechreading known as Tadoma. Two of the subjects, deaf-blind since infancy, acquired language and learned to speak through this tactile system; the third subject has used Tadoma since becoming deaf-blind at age 7. Linguistic knowledge and productive language are analyzed, using standardized tests and several tests constructed for this study. The subjects' language abilities prove to be extensive, comparing favorably in many areas with hearing individuals. The results illustrate a relatively minor effect of limited language exposure on eventual language achievement. The results also demonstrate the adequacy of the tactile sense, in these highly trained Tadoma users, for transmitting information about spoken language sufficient to support the development of language and learning to produce speech.

  7. Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders.

    PubMed

    Evans, Julia L; Gillam, Ronald B; Montgomery, James W

    2018-05-10

    This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLD.

  8. Health Literacy, Acculturation, and the Use of Preventive Oral Health Care by Somali Refugees Living in Massachusetts

    PubMed Central

    Hunter Adams, Jo; Penrose, Katherine L.; Cochran, Jennifer; Rybin, Denis; Doros, Gheorghe; Henshaw, Michelle; Paasche-Orlow, Michael

    2013-01-01

    Background This study investigated the impact of English health literacy, spoken proficiency, and acculturation on preventive dental care use among Somali refugees in Massachusetts. Methods A total of 439 adult Somalis who had been in the U.S. for ≤ 10 years were interviewed. English functional health literacy, dental word recognition, and spoken proficiency were measured using STOFHLA, REALD, and BEST Plus. Logistic regression tested associations of language measures with preventive dental care use. Results Without controlling for acculturation, participants with higher health literacy were 2.0 times more likely to have had preventive care (p=0.02). Subjects with higher word recognition were 1.8 times as likely to have had preventive care (p=0.04). Controlling for acculturation, these associations were no longer significant, and spoken proficiency was not associated with increased preventive care use. Discussion English health literacy and spoken proficiency were not associated with preventive dental care. Other factors, like acculturation, were more predictive of care use than language skills. PMID:23748902
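
    The effect sizes reported in this record ("2.0 times more likely") most plausibly correspond to odds ratios from the logistic regression. As a minimal sketch of the underlying quantity, here is an unadjusted odds ratio computed from a 2x2 table; the counts are made up for illustration, not taken from the study.

    ```python
    def odds_ratio(a, b, c, d):
        """Unadjusted odds ratio for a 2x2 table:
           exposed group:   a with the outcome, b without
           unexposed group: c with the outcome, d without."""
        return (a / b) / (c / d)

    # Illustrative counts only (not the study's data): participants with
    # higher vs. lower health literacy who did / did not use preventive
    # dental care.
    higher_lit = odds_ratio(80, 120, 50, 150)
    ```

    With these counts, the odds of preventive care use are twice as high in the higher-literacy group (odds ratio 2.0). The study's reported estimates were additionally adjusted for covariates such as acculturation, which is why the associations attenuated once acculturation entered the model.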

  9. Incremental comprehension of spoken quantifier sentences: Evidence from brain potentials.

    PubMed

    Freunberger, Dominik; Nieuwland, Mante S

    2016-09-01

    Do people incrementally incorporate the meaning of quantifier expressions to understand an unfolding sentence? Most previous studies concluded that quantifiers do not immediately influence how a sentence is understood, based on the observation that online N400-effects differed from offline plausibility judgments. Those studies, however, used serial visual presentation (SVP), which involves unnatural reading. In the current ERP experiment, we presented spoken positive and negative quantifier sentences ("Practically all/practically no postmen prefer delivering mail, when the weather is good/bad during the day"). Different from results obtained in a previously reported SVP study (Nieuwland, 2016), sentence truth-value N400 effects occurred in positive and negative quantifier sentences alike, reflecting fully incremental quantifier comprehension. This suggests that the prosodic information available during spoken language comprehension supports the generation of online predictions for upcoming words and that, at least for quantifier sentences, comprehension of spoken language may proceed more incrementally than comprehension during SVP reading. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  10. Linking Language with Embodied and Teleological Representations of Action for Humanoid Cognition

    PubMed Central

    Lallee, Stephane; Madden, Carol; Hoen, Michel; Dominey, Peter Ford

    2010-01-01

    The current research extends our framework for embodied language and action comprehension to include a teleological representation that allows goal-based reasoning for novel actions. The objective of this work is to implement and demonstrate the advantages of a hybrid, embodied-teleological approach to action–language interaction, both from a theoretical perspective, and via results from human–robot interaction experiments with the iCub robot. We first demonstrate how a framework for embodied language comprehension allows the system to develop a baseline set of representations for processing goal-directed actions such as “take,” “cover,” and “give.” Spoken language and visual perception are input modes for these representations, and the generation of spoken language is the output mode. Moving toward a teleological (goal-based reasoning) approach, a crucial component of the new system is the representation of the subcomponents of these actions, which includes relations between initial enabling states, and final resulting states for these actions. We demonstrate how grammatical categories including causal connectives (e.g., because, if–then) can allow spoken language to enrich the learned set of state-action-state (SAS) representations. We then examine how this enriched SAS inventory enhances the robot's ability to represent perceived actions in which the environment inhibits goal achievement. The paper addresses how language comes to reflect the structure of action, and how it can subsequently be used as an input and output vector for embodied and teleological aspects of action. PMID:20577629

  11. Saving a Language with Computers, Tape Recorders, and Radio.

    ERIC Educational Resources Information Center

    Bennett, Ruth

    This paper discusses the use of technology in instruction. It begins by examining research on technology and indigenous languages, focusing on the use of technology to get community attention for an indigenous language, improve the quantity of quality language, document spoken language, create sociocultural learning contexts, improve study skills,…

  12. Signs of Change: Contemporary Attitudes to Australian Sign Language

    ERIC Educational Resources Information Center

    Slegers, Claudia

    2010-01-01

    This study explores contemporary attitudes to Australian Sign Language (Auslan). Since at least the 1960s, sign languages have been accepted by linguists as natural languages with all of the key ingredients common to spoken languages. However, these visual-spatial languages have historically been subject to ignorance and myth in Australia and…

  13. Micro Language Planning and Cultural Renaissance in Botswana

    ERIC Educational Resources Information Center

    Alimi, Modupe M.

    2016-01-01

    Many African countries exhibit complex patterns of language use because of linguistic pluralism. The situation is often compounded by the presence of at least one foreign language that is either the official or second language. The language situation in Botswana depicts this complex pattern. Out of the 26 languages spoken in the country, including…

  14. A Grammar of Spoken Chinese.

    ERIC Educational Resources Information Center

    CHAO, YUEN REN

    The author of this grammar states that this is a "discussion book" and not an instruction book for learning Chinese. His analysis of Chinese grammar is based on current linguistic methods and assumes the reader has some knowledge of linguistics. This book constitutes a reference work for linguists and students of the Chinese language. Major…

  15. The Mastersingers: Language and Practice in an Operatic Masterclass

    ERIC Educational Resources Information Center

    Atkinson, Paul

    2013-01-01

    The paper presents a microethnographic examination of an operatic masterclass, based on a transcribed video recording of just one such class. It is a companion piece to a more generalised ethnographic account of such masterclasses as pedagogic events. The detailed analysis demonstrates the close relationship between spoken and unspoken actions in…

  16. The Poetic Conventions of Tina Sambal. Special Monograph Issue, Number 27.

    ERIC Educational Resources Information Center

    Goschnick, Hella Eleonore

    This study, an analysis of poetry in Tina Sambal, a Philippine language spoken by about 70,000 persons in northern Zambales province, looks at the characteristics of the different poetic genres and poetic conventions followed in writing songs and poems. Primary emphasis is on phonological restrictions, grammatical liberties, and semantic…

  17. A Grammar of Northern and Southern Gumuz

    ERIC Educational Resources Information Center

    Ahland, Colleen Anne

    2012-01-01

    Gumuz is a Nilo-Saharan dialect cluster spoken in the river valleys of northwestern Ethiopia and the southeastern part of the Republic of the Sudan. There are approximately 200,000 speakers, the majority of whom reside in Ethiopia. This study is a phonological and grammatical analysis of two main dialects/languages: Northern Gumuz and Southern…

  18. Establishing the Validity of the Task-Based English Speaking Test (TBEST) for International Teaching Assistants

    ERIC Educational Resources Information Center

    Witt, Autumn Song

    2010-01-01

    This dissertation follows an oral language assessment tool from initial design and implementation to validity analysis. The specialized variables of this study are the population: international teaching assistants and the purpose: spoken assessment as a hiring prerequisite. However, the process can easily be applied to other populations and…

  19. Naturalistic Observations of Elicited Expressive Communication of Children with Autism: An Analysis of Teacher Instructions

    ERIC Educational Resources Information Center

    Chiang, Hsu-Min

    2009-01-01

    This study observed the expressive communication of 17 Australian and 15 Taiwanese children with autism who were mute or had limited spoken language during 2-hour regular school routines, and analyzed teacher instructions associated with elicited expressive communication. Results indicated: (a) the frequency of occurrence of elicited expressive…

  20. The Influences of Child Intelligibility and Rate on Caregiver Responses to Toddlers With and Without Cleft Palate.

    PubMed

    Frey, Jennifer R; Kaiser, Ann P; Scherer, Nancy J

    2018-02-01

    The purpose of this study was to investigate the influences of child speech intelligibility and rate on caregivers' linguistic responses. This study compared the language use of children with cleft palate with or without cleft lip (CP±L) and their caregivers' responses. Descriptive analyses of children's language and caregivers' responses and a multilevel analysis of caregiver responsivity were conducted to determine whether there were differences in children's productive language and caregivers' responses to different types of child utterances. Play-based caregiver-child interactions were video recorded in a clinic setting. Thirty-eight children (19 toddlers with nonsyndromic repaired CP±L and 19 toddlers with typical language development) between 17 and 37 months old and their primary caregivers participated. Child and caregiver measures were obtained from transcribed and coded video recordings and included the rate, total number of words, and number of different words spoken by children and their caregivers, intelligibility of child utterances, and form of caregiver responses. Findings from this study suggest caregivers are highly responsive to toddlers' communication attempts, regardless of the intelligibility of those utterances. However, opportunities to respond were fewer for children with CP±L. Significant differences were observed in children's intelligibility and productive language and in caregivers' use of questions in response to unintelligible utterances of children with and without CP±L. This study provides information about differences in the language use of children with CP±L and in caregivers' responses to the spoken language of toddlers with and without CP±L.

  1. Semiotic diversity in utterance production and the concept of ‘language’

    PubMed Central

    Kendon, Adam

    2014-01-01

    Sign language descriptions that use an analytic model borrowed from spoken language structural linguistics have proved to be not fully appropriate. Pictorial and action-like modes of expression are integral to how signed utterances are constructed and to how they work. However, observation shows that speakers likewise use kinesic and vocal expressions that are not accommodated by spoken language structural linguistic models, including pictorial and action-like modes of expression. These, also, are integral to how speaker utterances in face-to-face interaction are constructed and to how they work. Accordingly, the object of linguistic inquiry should be revised, so that it comprises not only an account of the formal abstract systems that utterances make use of, but also an account of how the semiotically diverse resources that all languaging individuals use are organized in relation to one another. Both language as an abstract system and languaging should be the concern of linguistics. PMID:25092661

  2. Phonological awareness: explicit instruction for young deaf and hard-of-hearing children.

    PubMed

    Miller, Elizabeth M; Lederberg, Amy R; Easterbrooks, Susan R

    2013-04-01

    The goal of this study was to explore the development of spoken phonological awareness for deaf and hard-of-hearing (DHH) children with functional hearing (i.e., the ability to access spoken language through hearing). Teachers explicitly taught five preschoolers the phonological awareness skills of syllable segmentation, initial phoneme isolation, and rhyme discrimination in the context of a multifaceted emergent literacy intervention. Instruction occurred in settings where teachers used simultaneous communication or spoken language only. A multiple-baseline across skills design documented a functional relation between instruction and skill acquisition for those children who did not have the skills at baseline, with one exception; one child did not meet criteria for syllable segmentation. These results were confirmed by changes on phonological awareness tests that were administered at the beginning and end of the school year. We found that DHH children who varied in primary communication mode, chronological age, and language ability all benefited from explicit instruction in phonological awareness.

  3. Analysis of Classroom Discourse and Instructor's Awareness of Speech in a Spanish College Course

    ERIC Educational Resources Information Center

    Rondon-Pari, Graziela

    2011-01-01

    This is an instrumental case study and consists of the description, analysis and interpretation of the teaching practices of a Spanish college instructor. The aim of this research project is to find answers to the following questions: What is the ratio of L1-L2 spoken in class? For what functions is each language used? And, are there…

  4. The Influence of Linguistic Proficiency on Masked Text Recognition Performance in Adults With and Without Congenital Hearing Impairment.

    PubMed

    Huysmans, Elke; Bolk, Elske; Zekveld, Adriana A; Festen, Joost M; de Groot, Annette M B; Goverts, S Theo

    2016-01-01

    The authors first examined the influence of moderate to severe congenital hearing impairment (CHI) on the correctness of samples of elicited spoken language. Then, the authors used this measure as an indicator of linguistic proficiency and examined its effect on performance in language reception, independent of bottom-up auditory processing. In groups of adults with normal hearing (NH, n = 22), acquired hearing impairment (AHI, n = 22), and moderate to severe CHI (n = 21), the authors assessed linguistic proficiency by analyzing the morphosyntactic correctness of their spoken language production. Language reception skills were examined with a task for masked sentence recognition in the visual domain (text), at a readability level of 50%, using grammatically correct sentences and sentences with distorted morphosyntactic cues. The actual performance on the tasks was compared between groups. Adults with CHI made more morphosyntactic errors in spoken language production than adults with NH, while no differences were observed between the AHI and NH group. This outcome pattern sustained when comparisons were restricted to subgroups of AHI and CHI adults, matched for current auditory speech reception abilities. The data yielded no differences between groups in performance in masked text recognition of grammatically correct sentences in a test condition in which subjects could fully take advantage of their linguistic knowledge. Also, no difference between groups was found in the sensitivity to morphosyntactic distortions when processing short masked sentences, presented visually. These data showed that problems with the correct use of specific morphosyntactic knowledge in spoken language production are a long-term effect of moderate to severe CHI, independent of current auditory processing abilities. 
However, moderate to severe CHI generally does not impede performance in masked language reception in the visual modality, as measured in this study with short, degraded sentences. Aspects of linguistic proficiency that are affected by CHI thus do not seem to play a role in masked sentence recognition in the visual modality.

  5. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    PubMed

    Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

    Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and with micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices most strongly correlated with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  6. The acceleration of spoken-word processing in children's native-language acquisition: an ERP cohort study.

    PubMed

    Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hagiwara, Hiroko

    2011-04-01

    Healthy adults can identify spoken words at a remarkable speed, by incrementally analyzing word-onset information. It is currently unknown how this adult-level speed of spoken-word processing emerges during children's native-language acquisition. In a picture-word mismatch paradigm, we manipulated the semantic congruency between picture contexts and spoken words, and recorded event-related potential (ERP) responses to the words. Previous similar studies focused on the N400 response, but we focused instead on the onsets of semantic congruency effects (N200 or Phonological Mismatch Negativity), which contain critical information for incremental spoken-word processing. We analyzed ERPs obtained longitudinally from two age cohorts of 40 primary-school children (total n=80) in a 3-year period. Children first tested at 7 years of age showed earlier onsets of congruency effects (by approximately 70 ms) when tested 2 years later (i.e., at age 9). Children first tested at 9 years of age did not show such shortening of onset latencies 2 years later (i.e., at age 11). Overall, children's onset latencies at age 9 appeared similar to those of adults. These data challenge the previous hypothesis that word processing is well established at age 7. Instead they support the view that the acceleration of spoken-word processing continues beyond age 7. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Enduring Advantages of Early Cochlear Implantation for Spoken Language Development

    PubMed Central

    Geers, Ann E.; Nicholas, Johanna G.

    2013-01-01

    Purpose To determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12 and 38 months of age. Relative advantages of receiving a bilateral CI after age 4.5, better pre-CI aided hearing, and longer CI experience were also examined. Method Sixty children participated in a prospective longitudinal study of outcomes at 4.5 and 10.5 years of age. Twenty-nine children received a sequential second CI. Test scores were compared to normative samples of hearing age-mates, and predictors of outcomes were identified. Results Standard scores on language tests at 10.5 years of age remained significantly correlated with age of first cochlear implantation. Scores were not associated with receipt of a second, sequentially acquired CI. Significantly higher scores were achieved for vocabulary as compared with overall language, a finding not evident when the children were tested at younger ages. Conclusion Age-appropriate spoken language skills continued to be more likely with younger AOI, even after an average of 8.6 years of additional CI use. Receipt of a second implant between ages 4 and 10 years and longer duration of device use did not provide significant added benefit. PMID:23275406

  8. Why Dose Frequency Affects Spoken Vocabulary in Preschoolers with Down Syndrome

    ERIC Educational Resources Information Center

    Yoder, Paul J.; Woynaroski, Tiffany; Fey, Marc E.; Warren, Steven F.; Gardner, Elizabeth

    2015-01-01

    In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only…

  9. "Jaja" in Spoken German: Managing Knowledge Expectations

    ERIC Educational Resources Information Center

    Taleghani-Nikazm, Carmen; Golato, Andrea

    2016-01-01

    In line with the other contributions to this issue on teaching pragmatics, this paper provides teachers of German with a two-day lesson plan for integrating authentic spoken language and its associated cultural background into their teaching. Specifically, the paper discusses how "jaja" and its phonetic variants are systematically used…

  10. Individual Differences in Inhibitory Control Relate to Bilingual Spoken Word Processing

    ERIC Educational Resources Information Center

    Mercier, Julie; Pivneva, Irina; Titone, Debra

    2014-01-01

    We investigated whether individual differences in inhibitory control relate to bilingual spoken word recognition. While their eye movements were monitored, native English and native French English-French bilinguals listened to English words (e.g., "field") and looked at pictures corresponding to the target, a within-language competitor…

  11. Componential Skills in Second Language Development of Bilingual Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Verhoeven, Ludo; Steenge, Judit; van Leeuwe, Jan; van Balkom, Hans

    2017-01-01

    In this study, we investigated which componential skills can be distinguished in the second language (L2) development of 140 bilingual children with specific language impairment in the Netherlands, aged 6-11 years, divided into 3 age groups. L2 development was assessed by means of spoken language tasks representing different language skills…

  12. Second Language Research Forum Colloquia 2009: Colloquium--Language Learning Abroad: Insights from the Missionary Experience

    ERIC Educational Resources Information Center

    Hansen, Lynne

    2011-01-01

    Recent years have brought increasing attention to studies of language acquisition in a country where the language is spoken, as opposed to formal language study in classrooms. Research on language learners in immersion contexts is important, as the question of whether study abroad is valuable is still somewhat controversial among researchers…

  13. Living Language and Culture: Concordia Language Villages--One Example of Learning outside the Classroom

    ERIC Educational Resources Information Center

    Phillippe, Denise E.

    2012-01-01

    At Concordia Language Villages, language and culture are inextricably intertwined, as they are in life. Participants "live" and "do" language and culture 16 hours per day. The experiential, residential setting immerses the participants in the culture of the country or countries where the target language is spoken through food,…

  14. Sentence Repetition in Deaf Children with Specific Language Impairment in British Sign Language

    ERIC Educational Resources Information Center

    Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary

    2015-01-01

    Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…

  15. Alsatian versus Standard German: Regional Language Bilingual Primary Education in Alsace

    ERIC Educational Resources Information Center

    Harrison, Michelle Anne

    2016-01-01

    This article examines the current situation of regional language bilingual primary education in Alsace and contends that the regional language presents a special case in the context of France. The language comprises two varieties: Alsatian, which traditionally has been widely spoken, and Standard German, used as the language of reference and…

  16. Language and Literacy: The Case of India.

    ERIC Educational Resources Information Center

    Sridhar, Kamal K.

    Language and literacy issues in India are reviewed in terms of background, steps taken to combat illiteracy, and some problems associated with literacy. The following facts are noted: India has 106 languages spoken by more than 685 million people, there are several minor script systems, a major language has different dialects, a language may use…

  17. English Language Learners. What Works Clearinghouse Topic Report

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2007

    2007-01-01

    English language learners are students with a primary language other than English who have a limited range of speaking, reading, writing, and listening skills in English. English language learners also include students identified and determined by their school as having limited English proficiency and a language other than English spoken in the…

  18. Multiple Languages and the School Curriculum: Experiences from Tanzania

    ERIC Educational Resources Information Center

    Mushi, Selina Lesiaki Prosper

    2012-01-01

    This is a research report on children's use of multiple languages and the school curriculum. The study explored factors that trigger use of, and fluency in, multiple languages; and how fluency in multiple languages relates to thought processes and school performance. Advantages and disadvantages of using only one of the languages spoken were…

  19. Beyond Languages, beyond Modalities: Transforming the Study of Semiotic Repertoires

    ERIC Educational Resources Information Center

    Kusters, Annelies; Spotti, Massimiliano; Swanwick, Ruth; Tapio, Elina

    2017-01-01

    This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the concept of language repertoires. We discuss key…

  20. Spanish as a Second Language when L1 Is Quechua: Endangered Languages and the SLA Researcher

    ERIC Educational Resources Information Center

    Kalt, Susan E.

    2012-01-01

    Spanish is one of the most widely spoken languages in the world. Quechua is the largest indigenous language family to constitute the first language (L1) of second language (L2) Spanish speakers. Despite sheer number of speakers and typologically interesting contrasts, Quechua-Spanish second language acquisition is a nearly untapped research area,…

  1. Large-scale Cortical Network Properties Predict Future Sound-to-Word Learning Success

    PubMed Central

    Sheppard, John Patrick; Wang, Ji-Ping; Wong, Patrick C. M.

    2013-01-01

    The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants’ future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults. PMID:22360625
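    The global- and local-efficiency network properties this record contrasts between successful and less successful learners can be sketched with NetworkX. This is an illustrative toy only: the study built its networks from fMRI responses, whereas the small-world graph below is a stand-in generated for demonstration.

    ```python
    import networkx as nx

    # Toy small-world graph standing in for a thresholded functional
    # brain network (the study's networks came from fMRI data).
    G = nx.watts_strogatz_graph(n=30, k=4, p=0.1, seed=42)

    # Global efficiency: average inverse shortest-path length over all
    # node pairs; higher values mean faster network-wide information transfer.
    ge = nx.global_efficiency(G)

    # Local efficiency: average efficiency of each node's neighborhood
    # subgraph; an index of fault tolerance / local information transfer.
    le = nx.local_efficiency(G)

    print(f"global efficiency: {ge:.3f}")
    print(f"local efficiency:  {le:.3f}")
    ```

    In the study's terms, a "more cost-efficient" organization corresponds to achieving high global efficiency without paying for dense, redundant local connections.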

  2. Vocal interaction between children with Down syndrome and their parents.

    PubMed

    Thiemann-Bourque, Kathy S; Warren, Steven F; Brady, Nancy; Gilkerson, Jill; Richards, Jeffrey A

    2014-08-01

    The purpose of this study was to describe differences in parent input and child vocal behaviors of children with Down syndrome (DS) compared with typically developing (TD) children. The goals were to describe the language learning environments at distinctly different ages in early childhood. Nine children with DS and 9 age-matched TD children participated; 4 children in each group were ages 9-11 months, and 5 were between 25 and 54 months. Measures were derived from automated vocal analysis. A digital language processor measured the richness of the child's language environment, including number of adult words, conversational turns, and child vocalizations. Analyses indicated no significant differences in words spoken by parents of younger versus older children with DS and significantly more words spoken by parents of TD children than parents of children with DS. Differences between the DS and TD groups were observed in rates of all vocal behaviors, with no differences noted between the younger versus older children with DS, and the younger TD children did not vocalize significantly more than the younger DS children. Parents of children with DS continue to provide consistent levels of input across the early language learning years; however, child vocal behaviors remain low after the age of 24 months, suggesting the need for additional and alternative intervention approaches.

  3. Semiotic diversity in utterance production and the concept of 'language'.

    PubMed

    Kendon, Adam

    2014-09-19

    Sign language descriptions that use an analytic model borrowed from spoken language structural linguistics have proved to be not fully appropriate. Pictorial and action-like modes of expression are integral to how signed utterances are constructed and to how they work. However, observation shows that speakers likewise use kinesic and vocal expressions that are not accommodated by spoken language structural linguistic models, including pictorial and action-like modes of expression. These, also, are integral to how speaker utterances in face-to-face interaction are constructed and to how they work. Accordingly, the object of linguistic inquiry should be revised, so that it comprises not only an account of the formal abstract systems that utterances make use of, but also an account of how the semiotically diverse resources that all languaging individuals use are organized in relation to one another. Both language as an abstract system and languaging should be the concern of linguistics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  4. Iconic Factors and Language Word Order

    ERIC Educational Resources Information Center

    Moeser, Shannon Dawn

    1975-01-01

    College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)

  5. The Bilingual Language Interaction Network for Comprehension of Speech

    ERIC Educational Resources Information Center

    Shook, Anthony; Marian, Viorica

    2013-01-01

    During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can…

  6. Australian Aboriginal Deaf People and Aboriginal Sign Language

    ERIC Educational Resources Information Center

    Power, Des

    2013-01-01

    Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…

  7. Krio Language Manual. Revised Edition.

    ERIC Educational Resources Information Center

    Peace Corps, Freetown (Sierra Leone).

    Instructional materials for Krio, the creole spoken in Sierra Leone, are designed for Peace Corps volunteer language instruction and intended for the use of both students and instructors. Fifty-six units provide practice in language skills, particularly oral, geared to the daily language needs of volunteers. Lessons are designed for audio-lingual…

  8. Auditory Technology and Its Impact on Bilingual Deaf Education

    ERIC Educational Resources Information Center

    Mertes, Jennifer

    2015-01-01

    Brain imaging studies suggest that children can simultaneously develop, learn, and use two languages. A visual language, such as American Sign Language (ASL), facilitates development at the earliest possible moments in a child's life. Spoken language development can be delayed due to diagnostic evaluations, device fittings, and auditory skill…

  9. Teaching Reading through Language. TECHNIQUES.

    ERIC Educational Resources Information Center

    Jones, Edward V.

    1986-01-01

    Because reading is first and foremost a language comprehension process focusing on the visual form of spoken language, such teaching strategies as language experience and assisted reading have much to offer beginning readers. These techniques have been slow to become accepted by many adult literacy instructors; however, the two strategies,…

  10. Iran: Country Status Report.

    ERIC Educational Resources Information Center

    McFerren, Margaret

    A survey of the status of language usage in Iran begins with an overview of the usage pattern of Persian, the official language spoken by just over half the population, and the competing languages of six ethnic and linguistic minorities: Azerbaijani, Kurdish, Arabic, Gilaki, Luri-Bakhtiari, and Mazandarani. The development of language policy…

  11. El Espanol como Idioma Universal (Spanish as a Universal Language)

    ERIC Educational Resources Information Center

    Mijares, Jose

    1977-01-01

    A proposal to transform Spanish into a universal language because it possesses the prerequisites: it is a living language, spoken in several countries; it is a natural language; and it uses the ordinary alphabet. Details on simplification and standardization are given. (Text is in Spanish.) (AMH)

  12. The Role of Pronunciation in SENCOTEN Language Revitalization

    ERIC Educational Resources Information Center

    Bird, Sonya; Kell, Sarah

    2017-01-01

    Most Indigenous language revitalization programs in Canada currently emphasize spoken language. However, virtually no research has been done on the role of pronunciation in the context of language revitalization. This study set out to gain an understanding of attitudes around pronunciation in the SENCOTEN-speaking community, in order to determine…

  13. Cultural Pluralism in Japan: A Sociolinguistic Outline.

    ERIC Educational Resources Information Center

    Honna, Nobuyuki

    1980-01-01

    Addressing the common misconception that Japan is a mono-ethnic, mono-cultural, and monolingual society, this article focuses on several areas of sociolinguistic concern. It discusses: (1) the bimodalism of the Japanese deaf population between Japanese Sign Language as native language and Japanese Spoken Language as acquired second language; (2)…

  14. Audience Effects in American Sign Language Interpretation

    ERIC Educational Resources Information Center

    Weisenberg, Julia

    2009-01-01

    There is a system of English mouthing during interpretation that appears to be the result of language contact between spoken language and signed language. English mouthing is a voiceless visual representation of words on a signer's lips produced concurrently with manual signs. It is a type of borrowing prevalent among English-dominant…

  15. Language-in-Education Policies in the Catalan Language Area

    ERIC Educational Resources Information Center

    Vila i Moreno, F. Xavier

    2008-01-01

    The territories where Catalan is traditionally spoken as a native language constitute an attractive sociolinguistic laboratory which appears especially interesting from the point of view of language-in-education policies. The educational system has spearheaded the recovery of Catalan during the last 20 years. Schools are being attributed most of…

  16. The Abakua Secret Society in Cuba: Language and Culture.

    ERIC Educational Resources Information Center

    Cedeno, Rafael A. Nunez

    1988-01-01

    Reports on attempts to determine whether Cuban Abakua is a pidginized Afro-Spanish, creole, or dead language and concludes that some of this language, spoken by a secret society, has its roots in Efik, a language of the Benue-Congo, and seems to be a simple, ritualistic, structureless argot. (CB)

  17. Language Planning for Venezuela: The Role of English.

    ERIC Educational Resources Information Center

    Kelsey, Irving; Serrano, Jose

    A rationale for teaching foreign languages in Venezuelan schools is discussed. An included sociolinguistic profile of Venezuela indicates that Spanish is the sole language of internal communication needs. Other languages spoken in Venezuela serve primarily a group function among the immigrant and indigenous communities. However, the teaching of…

  18. Spelling Well Despite Developmental Language Disorder: What Makes it Possible?

    PubMed Central

    Rakhlin, Natalia; Cardoso-Martins, Cláudia; Kornilov, Sergey A.; Grigorenko, Elena L.

    2013-01-01

    The goal of the study was to investigate the overlap between Developmental Language Disorder (DLD) and Developmental Dyslexia, identified through spelling difficulties (SD), in Russian-speaking children. In particular, we studied the role of phoneme awareness (PA), rapid automatized naming (RAN), pseudoword repetition (PWR), and morphological (MA) and orthographic awareness (OA) in differentiating children with DLD who have SD from children with DLD who are average spellers, by comparing the two groups to each other, to typically developing children, and to children with SD but without spoken language deficits. One hundred forty-nine children, aged 10.40 to 14.00 years, participated in the study. The results indicated that the SD, DLD, and DLD/SD groups did not differ from each other on PA and RAN Letters and underperformed in comparison to the control groups. However, whereas the children with written language deficits (SD and DLD/SD groups) underperformed on RAN Objects and Digits, PWR, OA, and MA, the children with DLD and no SD performed similarly to the children from the control groups on these measures. In contrast, the two groups with spoken language deficits (DLD and DLD/SD) underperformed on RAN Colors in comparison to the control groups and the group of children with SD only. The results support the notion that those children with DLD who have unimpaired PWR and RAN skills are able to overcome their weaknesses in spoken language and PA and acquire basic literacy on a par with their age peers with typical language. We also argue that our findings support a multifactorial model of developmental language disorders (DLD). PMID:23860907

  19. The Development of Conjunction Use in Advanced L2 Speech

    ERIC Educational Resources Information Center

    Jaroszek, Marcin

    2011-01-01

    The article discusses the results of a longitudinal study of how the use of conjunctions, as an aspect of spoken discourse competence of 13 selected advanced students of English, developed throughout their 3-year English as a foreign language (EFL) tertiary education. The analysis was carried out in relation to a number of variables, including 2…

  20. "Nous" versus "on": Pronouns with First-Person Plural Reference in Synchronous French Chat

    ERIC Educational Resources Information Center

    van Compernolle, Remi A.

    2008-01-01

    This article explores variation in the use of the pronouns "nous" and "on" for first-person plural reference in a substantial corpus of French-language Internet chat discourse. The results indicate that "on" is nearly categorically preferred to "nous," which is in line with previous research on informal spoken French. A qualitative analysis of…

  1. Bimodal Bilinguals Co-activate Both Languages during Spoken Comprehension

    PubMed Central

    Shook, Anthony; Marian, Viorica

    2012-01-01

    Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are also activated in parallel. Hearing ASL-English bimodal bilinguals’ and English monolinguals’ eye-movements were recorded during a visual world paradigm, in which participants were instructed, in English, to select objects from a display. In critical trials, the target item appeared with a competing item that overlapped with the target in ASL phonology. Bimodal bilinguals looked more at competing items than at phonologically unrelated items, and looked more at competing items relative to monolinguals, indicating activation of the sign-language during spoken English comprehension. The findings suggest that language co-activation is not modality specific, and provide insight into the mechanisms that may underlie cross-modal language co-activation in bimodal bilinguals, including the role that top-down and lateral connections between levels of processing may play in language comprehension. PMID:22770677

  2. Participatory arts programs in residential dementia care: Playing with language differences.

    PubMed

    Swinnen, Aagje; de Medeiros, Kate

    2017-01-01

    This article examines connections between language, identity, and cultural difference in the context of participatory arts in residential dementia care. Specifically, it looks at how language differences become instruments for the language play that characterizes the participatory arts programs, TimeSlips and the Alzheimer's Poetry Project. These are two approaches that are predominantly spoken-word driven. Although people living with dementia experience cognitive decline that affects language, they are linguistic agents capable of participating in ongoing negotiation processes of connection, belonging, and in- and exclusion through language use. The analysis of two ethnographic vignettes, based on extensive fieldwork in the closed wards of two Dutch nursing homes, illustrates how TimeSlips and the Alzheimer's Poetry Project support them in this agency. The theoretical framework of the analysis consists of literature on the linguistic agency of people living with dementia, the notions of the homo ludens (or man the player) and ludic language, as well as linguistic strategies of belonging in relation to place.

  3. [Contact and admixture-the relationship between Dongxiang population and their language viewed from Y chromosomes].

    PubMed

    Wen, Shao-Qing; Xie, Xiao-Dong; Xu, Dan

    2013-06-01

    Dongxiang is one of the special ethnic groups of Gansu Province. Their language is one of the Mongolian languages of the Altaic language family, and their origin has long been controversial. The results of cluster analyses (multidimensional scaling analysis, dendrograms, principal component analyses, and networks) of the Dongxiang population and other ethnic groups indicated that the Dongxiang people are much closer to Central Asian ethnic groups than to other Mongolian populations. Admixture analyses also confirmed this result. This suggests that the Dongxiang people did not descend from Mongolians but from Central Asian ethnic groups that spoke Persian or Turkic languages. This mismatch between paternal genetic lineage and language classification might be explained by the elite-dominance model. The ancestral populations of the Dongxiang could be Central Asian ethnic groups assimilated by Mongolians in language and culture.

  4. Integrating Language-and-Culture Teaching: An Investigation of Spanish Teachers' Perceptions of the Objectives of Foreign Language Education

    ERIC Educational Resources Information Center

    Castro, Paloma; Sercu, Lies; Mendez Garcia, Maria del Carmen

    2004-01-01

    A recent shift has been noticeable in foreign language education theory. Previously, foreign languages were taught as a linguistic code. This then shifted to teaching that code against the sociocultural background of, primarily, one country in which the foreign language is spoken as a national language. More recently, teaching has reflected on…

  5. Relationship between the linguistic environments and early bilingual language development of hearing children in deaf-parented families.

    PubMed

    Kanto, Laura; Huttunen, Kerttu; Laakso, Marja-Leena

    2013-04-01

    We explored variation in the linguistic environments of hearing children of Deaf parents and how it was associated with their early bilingual language development. For that purpose we followed the children's productive vocabulary (measured with the MCDI; MacArthur Communicative Development Inventory) and syntactic complexity (measured with the MLU10; mean length of the 10 longest utterances the child produced during video-recorded play sessions) in both Finnish Sign Language and spoken Finnish between the ages of 12 and 30 months. Additionally, we developed new methodology for describing the linguistic environments of the children (N = 10). Large variation was uncovered in both the amount and type of language input and language acquisition among the children. Language exposure and increases in productive vocabulary and syntactic complexity were interconnected. Language acquisition was found to be more dependent on the amount of exposure in sign language than in spoken language. This was judged to be related to the status of sign language as a minority language. The results are discussed in terms of parents' language choices, family dynamics in Deaf-parented families, and optimal conditions for bilingual development.
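    The MLU10 measure used in this record (mean length of the 10 longest utterances) can be sketched as a small function. This is a simplified illustration: clinical MLU is typically counted in morphemes from hand-coded transcripts, whereas the sketch below counts whitespace-separated words, and the sample utterances are invented.

    ```python
    def mlu10(utterances):
        """Mean length, in words, of the 10 longest utterances.

        Simplifying assumption: length = whitespace-split word count,
        not the morpheme count used in clinical transcription.
        """
        lengths = sorted((len(u.split()) for u in utterances), reverse=True)
        top = lengths[:10]  # the 10 longest (or all, if fewer than 10)
        return sum(top) / len(top)

    # Hypothetical sample of child utterances from one play session.
    sample = ["look a dog", "the dog runs fast", "hi", "want more juice please",
              "mama", "big red ball", "where did it go", "no", "that one",
              "I see a cat", "up", "more"]
    print(round(mlu10(sample), 2))  # → 2.7
    ```

    Tracking this value longitudinally, as the study did between 12 and 30 months, gives a simple index of growth in syntactic complexity in each of the child's languages.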

  6. Language Shift or Increased Bilingualism in South Africa: Evidence from Census Data

    ERIC Educational Resources Information Center

    Posel, Dorrit; Zeller, Jochen

    2016-01-01

    In the post-apartheid era, South Africa has adopted a language policy that gives official status to 11 languages (English, Afrikaans, and nine Bantu languages). However, English has remained the dominant language of business, public office, and education, and some research suggests that English is increasingly being spoken in domestic settings.…

  7. A Grammar of Sierra Popoluca (Soteapanec, a Mixe-Zoquean Language)

    ERIC Educational Resources Information Center

    de Jong Boudreault, Lynda J.

    2009-01-01

    This dissertation is a comprehensive description of the grammar of Sierra Popoluca (SP, aka Soteapanec), a Mixe-Zoquean language spoken by approximately 28,000 people in Veracruz, Mexico. This grammar begins with an introduction to the language, its language family, a typological overview of the language, a brief history of my fieldwork, and the…

  8. Uncertainty in the Community Language Classroom: A Response to Michael Clyne.

    ERIC Educational Resources Information Center

    Stuart-Smith, Jane

    1997-01-01

    Response to an article on community languages in Australia supports the argument that community language speakers do not have an advantage over non-speakers in the community language classroom, but can be disadvantaged by differences between the language taught in the classroom and that spoken in homes. Examples are drawn from Punjabi instruction…

  9. Bimodal Bilinguals Co-Activate Both Languages during Spoken Comprehension

    ERIC Educational Resources Information Center

    Shook, Anthony; Marian, Viorica

    2012-01-01

    Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are…

  10. A Corpus-Based Study on Turkish Spoken Productions of Bilingual Adults

    ERIC Educational Resources Information Center

    Agçam, Reyhan; Bulut, Adem

    2016-01-01

    The current study investigated whether monolingual adult speakers of Turkish and bilingual adult speakers of Arabic and Turkish significantly differ regarding their spoken productions in Turkish. Accordingly, two groups of undergraduate students studying Turkish Language and Literature at a state university in Turkey were presented two videos on a…

  11. Effects of Prosody and Position on the Timing of Deictic Gestures

    ERIC Educational Resources Information Center

    Rusiewicz, Heather Leavy; Shaiman, Susan; Iverson, Jana M.; Szuminsky, Neil

    2013-01-01

    Purpose: In this study, the authors investigated the hypothesis that the perceived tight temporal synchrony of speech and gesture is evidence of an integrated spoken language and manual gesture communication system. It was hypothesized that experimental manipulations of the spoken response would affect the timing of deictic gestures. Method: The…

  12. Spoken English. "Educational Review" Occasional Publications Number Two.

    ERIC Educational Resources Information Center

    Wilkinson, Andrew; And Others

    Modifications of current assumptions both about the nature of the spoken language and about its functions in relation to personality development are suggested in this book. The discussion covers an explanation of "oracy" (the oral skills of speaking and listening); the contributions of linguistics to the teaching of English in Britain; the…

  13. Monitoring the Performance of Human and Automated Scores for Spoken Responses

    ERIC Educational Resources Information Center

    Wang, Zhen; Zechner, Klaus; Sun, Yu

    2018-01-01

    As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…

  14. Phonological Awareness: Explicit Instruction for Young Deaf and Hard-of-Hearing Children

    ERIC Educational Resources Information Center

    Miller, Elizabeth M.; Lederberg, Amy R.; Easterbrooks, Susan R.

    2013-01-01

    The goal of this study was to explore the development of spoken phonological awareness for deaf and hard-of-hearing children (DHH) with functional hearing (i.e., the ability to access spoken language through hearing). Teachers explicitly taught five preschoolers the phonological awareness skills of syllable segmentation, initial phoneme isolation,…

  15. Teaching English as a "Second Language" in Kenya and the United States: Convergences and Divergences

    ERIC Educational Resources Information Center

    Roy-Campbell, Zaline M.

    2015-01-01

    English is spoken in five countries as the native language and in numerous other countries as an official language and the language of instruction. In countries where English is the native language, it is taught to speakers of other languages as an additional language to enable them to participate in all domains of life of that country. In many…

  16. Malagasy dialects and the peopling of Madagascar

    PubMed Central

    Serva, Maurizio; Petroni, Filippo; Volchenkov, Dima; Wichmann, Søren

    2012-01-01

The origin of Malagasy DNA is half African and half Indonesian; nevertheless, the Malagasy language, spoken by the entire population, belongs to the Austronesian family. The language most closely related to Malagasy is Maanyan (Greater Barito East group of the Austronesian family), but related languages are also spoken in Sulawesi, Malaysia, and Sumatra. For this reason, and because Maanyan is spoken by a population that lives along the Barito river in Kalimantan and does not possess the necessary skills for long maritime navigation, the ethnic composition of the Indonesian colonizers is still unclear. There is a general consensus that Indonesian sailors reached Madagascar by a maritime trek, but the time, the path, and the landing area of the first colonization are all disputed. In this research, we address these questions, together with others such as the historical configuration of Malagasy dialects, using lexicostatistical and glottochronological analyses that draw upon an automated method recently proposed by the authors. The data were collected by the first author at the beginning of 2010 with the invaluable help of Joselinà Soafara Néré and consist of Swadesh lists of 200 items for 23 dialects covering all areas of the island. PMID:21632612

  17. Malagasy dialects and the peopling of Madagascar.

    PubMed

    Serva, Maurizio; Petroni, Filippo; Volchenkov, Dima; Wichmann, Søren

    2012-01-07

The origin of Malagasy DNA is half African and half Indonesian; nevertheless, the Malagasy language, spoken by the entire population, belongs to the Austronesian family. The language most closely related to Malagasy is Maanyan (Greater Barito East group of the Austronesian family), but related languages are also spoken in Sulawesi, Malaysia, and Sumatra. For this reason, and because Maanyan is spoken by a population that lives along the Barito river in Kalimantan and does not possess the necessary skills for long maritime navigation, the ethnic composition of the Indonesian colonizers is still unclear. There is a general consensus that Indonesian sailors reached Madagascar by a maritime trek, but the time, the path, and the landing area of the first colonization are all disputed. In this research, we address these questions, together with others such as the historical configuration of Malagasy dialects, using lexicostatistical and glottochronological analyses that draw upon an automated method recently proposed by the authors. The data were collected by the first author at the beginning of 2010 with the invaluable help of Joselinà Soafara Néré and consist of Swadesh lists of 200 items for 23 dialects covering all areas of the island.
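The automated lexicostatistical method referenced in the two records above compares Swadesh lists across dialects. A minimal sketch of one common approach, scoring each dialect pair by the normalized Levenshtein distance averaged over aligned list items (the three-item word lists below are invented for illustration, not the authors' data):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def normalized_distance(a: str, b: str) -> float:
    # Edit distance scaled by the longer word, so values lie in [0, 1].
    return levenshtein(a, b) / max(len(a), len(b))

def dialect_distance(list1, list2) -> float:
    # Mean normalized distance across aligned Swadesh-list items.
    return sum(normalized_distance(a, b) for a, b in zip(list1, list2)) / len(list1)

# Hypothetical three-item lists for two dialects.
d1 = ["rano", "vato", "lanitra"]
d2 = ["ranu", "watu", "langit"]
print(round(dialect_distance(d1, d2), 3))  # → 0.393
```

A full analysis would compute this for all 23 × 22 / 2 dialect pairs over the 200-item lists and feed the resulting distance matrix into a clustering or tree-building step.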

  18. Guest Comment: Universal Language Requirement.

    ERIC Educational Resources Information Center

    Sherwood, Bruce Arne

    1979-01-01

Explains that the ability to read English is almost universal among scientists; however, there are enormous problems with spoken English. Advocates the use of Esperanto as a viable alternative and as a language requirement for graduate work. (GA)

  19. Discussion Forum Interactions: Text and Context

    ERIC Educational Resources Information Center

    Montero, Begona; Watts, Frances; Garcia-Carbonell, Amparo

    2007-01-01

    Computer-mediated communication (CMC) is currently used in language teaching as a bridge for the development of written and spoken skills [Kern, R., 1995. "Restructuring classroom interaction with networked computers: effects on quantity and characteristics of language production." "The Modern Language Journal" 79, 457-476]. Within CMC…

  20. Neural Network Computing and Natural Language Processing.

    ERIC Educational Resources Information Center

    Borchardt, Frank

    1988-01-01

    Considers the application of neural network concepts to traditional natural language processing and demonstrates that neural network computing architecture can: (1) learn from actual spoken language; (2) observe rules of pronunciation; and (3) reproduce sounds from the patterns derived by its own processes. (Author/CB)

  1. An Introduction to Spoken Setswana.

    ERIC Educational Resources Information Center

    Mistry, Karen S.

    A guide to instruction in Setswana, the most widely dispersed Bantu language in Southern Africa, includes general material about the language, materials for the teacher, 163 lessons, vocabulary lists, and supplementary materials and exercises. Introductory material about the language discusses its distribution and characteristics, and orthography.…

  2. Where Should We Look for Language?

    ERIC Educational Resources Information Center

    Stokoe, William C.

    1986-01-01

    Argues that the beginnings of language need to be sought not in the universal abstract grammar proposed by Chomsky but in the evolution of the everyday interaction of the human species. Studies indicate that there is no great gulf between spoken language and nonverbal communication. (SED)

  3. A Grammar of Kurtop

    ERIC Educational Resources Information Center

    Hyslop, Gwendolyn

    2011-01-01

    Kurtop is a Tibeto-Burman language spoken by approximately 15,000 people in Northeastern Bhutan. This dissertation is the first descriptive grammar of the language, based on extensive fieldwork and community-driven language documentation in Bhutan. When possible, analyses are presented in typological and historical/comparative perspectives and…

  4. The bridge of iconicity: from a world of experience to the experience of language.

    PubMed

    Perniss, Pamela; Vigliocco, Gabriella

    2014-09-19

    Iconicity, a resemblance between properties of linguistic form (both in spoken and signed languages) and meaning, has traditionally been considered to be a marginal, irrelevant phenomenon for our understanding of language processing, development and evolution. Rather, the arbitrary and symbolic nature of language has long been taken as a design feature of the human linguistic system. In this paper, we propose an alternative framework in which iconicity in face-to-face communication (spoken and signed) is a powerful vehicle for bridging between language and human sensori-motor experience, and, as such, iconicity provides a key to understanding language evolution, development and processing. In language evolution, iconicity might have played a key role in establishing displacement (the ability of language to refer beyond what is immediately present), which is core to what language does; in ontogenesis, iconicity might play a critical role in supporting referentiality (learning to map linguistic labels to objects, events, etc., in the world), which is core to vocabulary development. Finally, in language processing, iconicity could provide a mechanism to account for how language comes to be embodied (grounded in our sensory and motor systems), which is core to meaningful communication.

  5. The bridge of iconicity: from a world of experience to the experience of language

    PubMed Central

    Perniss, Pamela; Vigliocco, Gabriella

    2014-01-01

    Iconicity, a resemblance between properties of linguistic form (both in spoken and signed languages) and meaning, has traditionally been considered to be a marginal, irrelevant phenomenon for our understanding of language processing, development and evolution. Rather, the arbitrary and symbolic nature of language has long been taken as a design feature of the human linguistic system. In this paper, we propose an alternative framework in which iconicity in face-to-face communication (spoken and signed) is a powerful vehicle for bridging between language and human sensori-motor experience, and, as such, iconicity provides a key to understanding language evolution, development and processing. In language evolution, iconicity might have played a key role in establishing displacement (the ability of language to refer beyond what is immediately present), which is core to what language does; in ontogenesis, iconicity might play a critical role in supporting referentiality (learning to map linguistic labels to objects, events, etc., in the world), which is core to vocabulary development. Finally, in language processing, iconicity could provide a mechanism to account for how language comes to be embodied (grounded in our sensory and motor systems), which is core to meaningful communication. PMID:25092668

  6. Berber Dialects. Materials Status Report.

    ERIC Educational Resources Information Center

    Center for Applied Linguistics, Washington, DC. Language/Area Reference Center.

    The materials status report for the Berber languages, minority languages spoken in northern Africa, is one of a series intended to provide the nonspecialist with a picture of the availability and quality of texts for teaching various languages to English speakers. The report consists of: (1) a brief narrative description of the Berber language,…

  7. The Unified Phonetic Transcription for Teaching and Learning Chinese Languages

    ERIC Educational Resources Information Center

    Shieh, Jiann-Cherng

    2011-01-01

To preserve their distinctive cultures, people devise writing systems for their languages as recording tools. Mandarin, Taiwanese, and Hakka are the three major and most widely spoken Han dialects in Chinese society. Their writing systems all use Han characters. Various and independent phonetic…

  8. The Knowledge and Perceptions of Prospective Teachers and Speech Language Therapists in Collaborative Language and Literacy Instruction

    ERIC Educational Resources Information Center

    Wilson, Leanne; McNeill, Brigid; Gillon, Gail T.

    2015-01-01

    Successful collaboration among speech and language therapists (SLTs) and teachers fosters the creation of communication friendly classrooms that maximize children's spoken and written language learning. However, these groups of professionals may have insufficient opportunity in their professional study to develop the shared knowledge, perceptions…

  9. Dilemmatic Aspects of Language Policies in a Trilingual Preschool Group

    ERIC Educational Resources Information Center

    Puskás, Tünde; Björk-Willén, Polly

    2017-01-01

    This article explores dilemmatic aspects of language policies in a preschool group in which three languages (Swedish, Romani and Arabic) are spoken on an everyday basis. The article highlights the interplay between policy decisions on the societal level, the teachers' interpretations of these policies, as well as language practices on the micro…

  10. Making a Difference: Language Teaching for Intercultural and International Dialogue

    ERIC Educational Resources Information Center

    Byram, Michael; Wagner, Manuela

    2018-01-01

    Language teaching has long been associated with teaching in a country or countries where a target language is spoken, but this approach is inadequate. In the contemporary world, language teaching has a responsibility to prepare learners for interaction with people of other cultural backgrounds, teaching them skills and attitudes as well as…

  11. Grammatical Processing of Spoken Language in Child and Adult Language Learners

    ERIC Educational Resources Information Center

    Felser, Claudia; Clahsen, Harald

    2009-01-01

    This article presents a selective overview of studies that have investigated auditory language processing in children and late second-language (L2) learners using online methods such as event-related potentials (ERPs), eye-movement monitoring, or the cross-modal priming paradigm. Two grammatical phenomena are examined in detail, children's and…

  12. Digital Language Death

    PubMed Central

    Kornai, András

    2013-01-01

    Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide. PMID:24167559

  13. Politeness Strategies among Native and Romanian Speakers of English

    ERIC Educational Resources Information Center

    Ambrose, Dominic

    1995-01-01

    Background: Politeness strategies vary from language to language and within each society. At times the wrong strategies can have disastrous effects. This can occur when languages are used by non-native speakers or when they are used outside of their own home linguistic context. Purpose: This study of spoken language compares the politeness…

  14. Vernacular Literacy in the Touo Language of the Solomon Islands

    ERIC Educational Resources Information Center

    Dunn, Michael

    2005-01-01

    The Touo language is a non-Austronesian language spoken on Rendova Island (Western Province, Solomon Islands). First language speakers of Touo are typically multilingual, and are likely to speak other (Austronesian) vernaculars, as well as Solomon Island Pijin and English. There is no institutional support of literacy in Touo: schools function in…

  15. Documenting Indigenous Knowledge and Languages: Research Planning & Protocol.

    ERIC Educational Resources Information Center

    Leonard, Beth

    2001-01-01

    The author's experiences of learning her heritage language of Deg Xinag, an Athabascan language spoken in Alaska, serve as a backdrop for discussing issues in learning endangered indigenous languages. When Deg Xinag is taught by linguists, obvious differences between English and Deg Xinag are not articulated, due to the lack of knowledge of…

  16. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  17. Regional Sign Language Varieties in Contact: Investigating Patterns of Accommodation

    ERIC Educational Resources Information Center

    Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy

    2016-01-01

    Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…

  18. Of Elastic Clouds and Treebanks: New Opportunities for Content-Based and Data-Driven Language Learning

    ERIC Educational Resources Information Center

    Godwin-Jones, Robert

    2008-01-01

    Creating effective electronic tools for language learning frequently requires large data sets containing extensive examples of actual human language use. Collections of authentic language in spoken and written forms provide developers the means to enrich their applications with real world examples. As the Internet continues to expand…

  19. Language and Literacy Development of Deaf and Hard-of-Hearing Children: Successes and Challenges

    ERIC Educational Resources Information Center

    Lederberg, Amy R.; Schick, Brenda; Spencer, Patricia E.

    2013-01-01

    Childhood hearing loss presents challenges to language development, especially spoken language. In this article, we review existing literature on deaf and hard-of-hearing (DHH) children's patterns and trajectories of language as well as development of theory of mind and literacy. Individual trajectories vary significantly, reflecting access to…

  20. The Pursuit of Language Appropriate Care: Remote Simultaneous Medical Interpretation Use

    ERIC Educational Resources Information Center

    Logan, Debra M.

    2010-01-01

    Background: The U.S. government mandates nurses to deliver linguistically appropriate care to hospital patients. It is difficult for nurses to implement the language mandates because there are 6,912 active living languages spoken in the world. Language barriers appear to place limited English proficient (LEP) patients at increased risk for harm…

  1. Language Education Policies and Inequality in Africa: Cross-National Empirical Evidence

    ERIC Educational Resources Information Center

    Coyne, Gary

    2015-01-01

    This article examines the relationship between inequality and education through the lens of colonial language education policies in African primary and secondary school curricula. The languages of former colonizers almost always occupy important places in society, yet they are not widely spoken as first languages, meaning that most people depend…

  2. Spoken Oral Language and Adult Struggling Readers

    ERIC Educational Resources Information Center

    Bakhtiari, Dariush; Greenberg, Daphne; Patton-Terry, Nicole; Nightingale, Elena

    2015-01-01

Oral language is a critical component of reading acquisition. Much of the research concerning the relationship between oral language and reading ability is focused on children, while there is a paucity of research focusing on this relationship for adults who struggle with their reading. Oral language as defined in this paper…

  3. How vocabulary size in two languages relates to efficiency in spoken word recognition by young Spanish-English bilinguals

    PubMed Central

    Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda

    2010-01-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n=26; 2;6 yrs). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children’s facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children’s ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language. PMID:19726000
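The control analysis described above, relating processing efficiency to same-language vocabulary after accounting for processing speed and vocabulary size in the other language, amounts to a partial correlation. A minimal sketch with synthetic data (the variable names and data are illustrative, not the study's):

```python
import numpy as np

def partial_corr(x, y, controls):
    # Correlate the residuals of x and y after regressing out the controls
    # (an intercept column is included so means are removed as well).
    Z = np.column_stack([np.ones(len(x))] + controls)
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(42)
speed = rng.standard_normal(200)                      # shared control variable
efficiency = speed + 0.1 * rng.standard_normal(200)   # both outcomes driven by it
vocab = speed + 0.1 * rng.standard_normal(200)

raw = float(np.corrcoef(efficiency, vocab)[0, 1])
partial = partial_corr(efficiency, vocab, [speed])
# The strong raw association largely vanishes once the control is partialled out.
print(raw > 0.9, abs(partial) < 0.3)
```

In the study's design, a link that survives this partialling (as the within-language one did) is evidence of a relationship beyond what the controls explain.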

  4. Rhythm in language acquisition.

    PubMed

    Langus, Alan; Mehler, Jacques; Nespor, Marina

    2017-10-01

Spoken language is governed by rhythm. Linguistic rhythm is hierarchical and the rhythmic hierarchy partially mimics the prosodic as well as the morpho-syntactic hierarchy of spoken language. It can thus provide learners with cues about the structure of the language they are acquiring. We identify three universal levels of linguistic rhythm - the segmental level, the level of the metrical feet and the phonological phrase level - and discuss why primary lexical stress is not rhythmic. We survey experimental evidence on rhythm perception in young infants and native speakers of various languages to determine the properties of linguistic rhythm that are present at birth, those that mature during the first year of life and those that are shaped by the linguistic environment of language learners. We conclude with a discussion of the major gaps in current knowledge on linguistic rhythm and highlight areas of interest for future research that are most likely to yield significant insights into the nature, the perception, and the usefulness of linguistic rhythm.

  5. Singing can facilitate foreign language learning.

    PubMed

    Ludke, Karen M; Ferreira, Fernanda; Overy, Katie

    2014-01-01

    This study presents the first experimental evidence that singing can facilitate short-term paired-associate phrase learning in an unfamiliar language (Hungarian). Sixty adult participants were randomly assigned to one of three "listen-and-repeat" learning conditions: speaking, rhythmic speaking, or singing. Participants in the singing condition showed superior overall performance on a collection of Hungarian language tests after a 15-min learning period, as compared with participants in the speaking and rhythmic speaking conditions. This superior performance was statistically significant (p < .05) for the two tests that required participants to recall and produce spoken Hungarian phrases. The differences in performance were not explained by potentially influencing factors such as age, gender, mood, phonological working memory ability, or musical ability and training. These results suggest that a "listen-and-sing" learning method can facilitate verbatim memory for spoken foreign language phrases.

  6. Pronunciation difficulty, temporal regularity, and the speech-to-song illusion.

    PubMed

    Margulis, Elizabeth H; Simchy-Gross, Rhimmon; Black, Justin L

    2015-01-01

    The speech-to-song illusion (Deutsch et al., 2011) tracks the perceptual transformation from speech to song across repetitions of a brief spoken utterance. Because it involves no change in the stimulus itself, but a dramatic change in its perceived affiliation to speech or to music, it presents a unique opportunity to comparatively investigate the processing of language and music. In this study, native English-speaking participants were presented with brief spoken utterances that were subsequently repeated ten times. The utterances were drawn either from languages that are relatively difficult for a native English speaker to pronounce, or languages that are relatively easy for a native English speaker to pronounce. Moreover, the repetition could occur at regular or irregular temporal intervals. Participants rated the utterances before and after the repetitions on a 5-point Likert-like scale ranging from "sounds exactly like speech" to "sounds exactly like singing." The difference in ratings before and after was taken as a measure of the strength of the speech-to-song illusion in each case. The speech-to-song illusion occurred regardless of whether the repetitions were spaced at regular temporal intervals or not; however, it occurred more readily if the utterance was spoken in a language difficult for a native English speaker to pronounce. Speech circuitry seemed more liable to capture native and easy-to-pronounce languages, and more reluctant to relinquish them to perceived song across repetitions.

  7. Gesture, sign, and language: The coming of age of sign language and gesture studies.

    PubMed

    Goldin-Meadow, Susan; Brentari, Diane

    2017-01-01

    How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.

  8. The effect of written text on comprehension of spoken English as a foreign language.

    PubMed

    Diao, Yali; Chandler, Paul; Sweller, John

    2007-01-01

    Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.

  9. A decline in prosocial language helps explain public disapproval of the US Congress.

    PubMed

    Frimer, Jeremy A; Aquino, Karl; Gebauer, Jochen E; Zhu, Luke Lei; Oakes, Harrison

    2015-05-26

Talking about helping others makes a person seem warm and leads to social approval. This work examines the real-world consequences of this basic, social-cognitive phenomenon by examining whether record-low levels of public approval of the US Congress may, in part, be a product of declining use of prosocial language during Congressional debates. A text analysis of all 124 million words spoken in the House of Representatives between 1996 and 2014 found that declining levels of prosocial language strongly predicted public disapproval of Congress 6 mo later. Warm, prosocial language still predicted public approval when removing the effects of societal and global factors (e.g., the September 11 attacks) and Congressional efficacy (e.g., passing bills), suggesting that prosocial language has an independent, direct effect on social approval.
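The text-analysis measure described above boils down to a dictionary-based rate of prosocial words per unit of speech. A minimal sketch (the word list here is illustrative; the study used its own prosocial dictionary):

```python
# Illustrative prosocial word list, not the study's actual dictionary.
PROSOCIAL = {"help", "care", "cooperate", "share", "contribute", "support"}

def prosocial_rate(speech: str) -> float:
    # Rate of prosocial words per 1,000 words, with light punctuation stripping.
    words = speech.lower().split()
    hits = sum(w.strip(".,;:!?") in PROSOCIAL for w in words)
    return 1000 * hits / len(words)

speech = "We must help families and support communities that share our goals."
print(round(prosocial_rate(speech), 1))  # → 272.7
```

Computed per month over the Congressional record, a series of such rates can then be lagged and regressed against approval polling, as the study describes.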

  10. Three Factors Are Critical in Order to Synthesize Intelligible Noise-Vocoded Japanese Speech

    PubMed Central

    Kishida, Takuya; Nakajima, Yoshitaka; Ueda, Kazuo; Remijn, Gerard B.

    2016-01-01

Factor analysis (principal component analysis followed by varimax rotation) had shown that 3 common factors appear across 20 critical-band power fluctuations derived from spoken sentences of eight different languages [Ueda et al. (2010). Fechner Day 2010, Padua]. The present study investigated the contributions of such power-fluctuation factors to speech intelligibility. The method of factor analysis was modified to obtain factors suitable for resynthesizing speech sounds as 20-critical-band noise-vocoded speech. The resynthesized speech sounds were used for an intelligibility test. The modification of factor analysis ensured that the resynthesized speech sounds were not accompanied by a steady background noise caused by the data reduction procedure. Spoken sentences of British English, Japanese, and Mandarin Chinese were subjected to this modified analysis. Confirming the earlier analysis, indeed 3–4 factors were common to these languages. The number of power-fluctuation factors needed to make noise-vocoded speech intelligible was then examined. Critical-band power fluctuations of the Japanese spoken sentences were resynthesized from the obtained factors, resulting in noise-vocoded-speech stimuli, and the intelligibility of these speech stimuli was tested by 12 native Japanese speakers. Japanese mora (syllable-like phonological unit) identification performances were measured when the number of factors was 1–9. Statistically significant improvement in intelligibility was observed when the number of factors was increased stepwise up to 6. The 12 listeners identified 92.1% of the morae correctly on average in the 6-factor condition. The intelligibility improved sharply when the number of factors changed from 2 to 3. In this step, the cumulative contribution ratio of factors improved only by 10.6%, from 37.3 to 47.9%, but the average mora identification leaped from 6.9 to 69.2%. The results indicated that, if the number of factors is 3 or more, elementary linguistic information is preserved in such noise-vocoded speech. PMID:27199790
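The core analysis named in this record, principal components followed by varimax rotation of the band-by-factor loading matrix, can be sketched in NumPy. The data below are random stand-ins with the study's dimensions (20 critical bands, 3 factors), not the actual power fluctuations, and the varimax routine is a standard textbook implementation rather than the authors' modified one:

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    # Iterative varimax rotation of a (p variables x k factors) loading matrix.
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p))
        R = u @ vt                      # orthogonal update toward the varimax criterion
        var_new = s.sum()
        if var_new - var_old < tol:
            break
        var_old = var_new
    return loadings @ R

# PCA via SVD on standardized data (rows = time frames, cols = critical bands).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))      # stand-in for 20-band power fluctuations
X = (X - X.mean(0)) / X.std(0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 3
loadings = Vt[:k].T * S[:k] / np.sqrt(len(X) - 1)   # band-by-factor loadings
rotated = varimax(loadings)
print(rotated.shape)  # → (20, 3)
```

Because the rotation matrix is orthogonal, each band's communality (the sum of its squared loadings) is unchanged by the rotation; only how variance is distributed across factors changes, which is what makes the rotated factors easier to interpret.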

  11. Family Language Policy and School Language Choice: Pathways to Bilingualism and Multilingualism in a Canadian Context

    ERIC Educational Resources Information Center

    Slavkov, Nikolay

    2017-01-01

    This article reports on a survey with 170 school-age children growing up with two or more languages in the Canadian province of Ontario where English is the majority language, French is a minority language, and numerous other minority languages may be spoken by immigrant or Indigenous residents. Within this context the study focuses on minority…

  12. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection (Pub Version, Open Access)

    DTIC Science & Technology

    2016-05-03

    using additional English resources. 2. Background The Babel program1 is an international collaborative effort sponsored by the US Intelligence Advanced...phenomenon is not as well studied for English / African language pairs, but some results are available8,9. 3. Experimental Setup The Swahili analysis...word pronunciations. From the analysis it was concluded that in most cases English words were pronounced using standard English letter-to-sound rules

  13. Key Data on Teaching Languages at School in Europe. 2017 Edition. Eurydice Report

    ERIC Educational Resources Information Center

    Baïdak, Nathalie; Balcon, Marie-Pascale; Motiejunaite, Akvile

    2017-01-01

    Linguistic diversity is part of Europe's DNA. It embraces not only the official languages of Member States, but also the regional and/or minority languages spoken for centuries on European territory, as well as the languages brought by the various waves of migrants. The coexistence of this variety of languages constitutes an asset, but it is also…

  14. The Impact of Biculturalism on Language and Literacy Development: Teaching Chinese English Language Learners

    ERIC Educational Resources Information Center

    Palmer, Barbara C.; Chen, Chia-I; Chang, Sara; Leclere, Judith T.

    2006-01-01

    According to the 2000 United States Census, Americans age five and older who speak a language other than English at home grew 47 percent over the preceding decade. This group accounts for slightly less than one in five Americans (17.9%). Among the minority languages spoken in the United States, Asian-language speakers, including Chinese and other…

  15. Working with the Bilingual Child Who Has a Language Delay. Meeting Learning Challenges

    ERIC Educational Resources Information Center

    Greenspan, Stanley I.

    2005-01-01

    It is very important to determine if a bilingual child's language delay is simply in English or also in the child's native language. Understandably, many children have higher levels of language development in the language spoken at home. To discover if this is the case, observe the child talking with his parents. Sometimes, even without…

  16. Mapudungun According to Its Speakers: Mapuche Intellectuals and the Influence of Standard Language Ideology

    ERIC Educational Resources Information Center

    Lagos, Cristián; Espinoza, Marco; Rojas, Darío

    2013-01-01

    In this paper, we analyse the cultural models (or folk theory of language) that the Mapuche intellectual elite have about Mapudungun, the native language of the Mapuche people still spoken today in Chile as the major minority language. Our theoretical frame is folk linguistics and studies of language ideology, but we have also taken an applied…

  17. "We Communicated That Way for a Reason": Language Practices and Language Ideologies among Hearing Adults Whose Parents Are Deaf

    ERIC Educational Resources Information Center

    Pizer, Ginger; Walters, Keith; Meier, Richard P.

    2013-01-01

    Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing…

  18. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

    ERIC Educational Resources Information Center

    Williams, Joshua T.; Newman, Sharlene D.

    2017-01-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…

  19. Britain's South Asian Languages.

    ERIC Educational Resources Information Center

    Mobbs, Michael

    This book focuses on the languages spoken by people of South Asian origin living in Britain and is intended to assist individuals in Britain whose work involves them with speakers of these languages. The approach taken is descriptive and practical, offering linguistic, geographic, and historical background information leading to appreciation of…

  20. The Potential of Elicited Imitation for Oral Output Practice in German L2

    ERIC Educational Resources Information Center

    Cornillie, Frederik; Baten, Kristof; De Hertog, Dirk

    2017-01-01

    This paper reports on the potential of Oral Elicited Imitation (OEI) as a format for output practice, building on an analysis of picture-matching and spoken data collected from 36 university-level learners of German as a second language (L2) in a web-based assessment task inspired by Input Processing (VanPatten, 2004). The design and development…

  1. The Land Remembers: Landscape Terms and Place Names in Lowland Chontal of Oaxaca, Mexico

    ERIC Educational Resources Information Center

    O'Connor, Loretta; Kroefges, Peter C.

    2008-01-01

    This paper examines landscape terminology and place names of the Chontal region in the state of Oaxaca in southern Mexico, with a focus on terms from Lowland Chontal, a highly endangered language spoken near the Pacific coast. In addition to the linguistic analysis, the paper presents a general description of the physical geography of the area and…

  2. The Association Between Positive Relationships with Adults and Suicide-Attempt Resilience in American Indian Youth in New Mexico.

    PubMed

    FitzGerald, Courtney A; Fullerton, Lynne; Green, Dan; Hall, Meryn; Peñaloza, Linda J

    2017-01-01

    This study examined the 2013 New Mexico Youth Risk and Resiliency Survey (NM-YRRS) to determine whether cultural connectedness and positive relationships with adults protected against suicide attempts among American Indian and Alaska Native (AI/AN) youth and whether these relationships differed by gender. The sample included 2,794 AI/AN students in grades 9 to 12 who answered the question about past-year suicide attempts. Protective factor variables tested included relationships with adults at home, school, and the community. The language spoken at home was used as a proxy measure for cultural connectedness. Positive relationships with adults were negatively associated with the prevalence of past-year suicide attempts in bivariate analysis. However, language spoken at home was not associated with the prevalence of suicide attempts. Multivariate analysis showed that among girls, relationships with adults at home, at school, and in the community were independently associated with lower suicide-attempt prevalence. Among boys, only relationships with adults at home showed such an association. These results have important implications for the direction of future research about protective factors associated with AI/AN youth suicide risk as well as in the design of suicide intervention and prevention programs.

  3. Songs to syntax: the linguistics of birdsong.

    PubMed

    Berwick, Robert C; Okanoya, Kazuo; Beckers, Gabriel J L; Bolhuis, Johan J

    2011-03-01

    Unlike our primate cousins, many species of bird share with humans a capacity for vocal learning, a crucial factor in speech acquisition. There are striking behavioural, neural and genetic similarities between auditory-vocal learning in birds and human infants. Recently, the linguistic parallels between birdsong and spoken language have begun to be investigated. Although both birdsong and human language are hierarchically organized according to particular syntactic constraints, birdsong structure is best characterized as 'phonological syntax', resembling aspects of human sound structure. Crucially, birdsong lacks semantics and words. Formal language and linguistic analysis remains essential for the proper characterization of birdsong as a model system for human speech and language, and for the study of the evolution of brain and cognition. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Targeted Help for Spoken Dialogue Systems: Intelligent Feedback Improves Naive Users' Performance

    NASA Technical Reports Server (NTRS)

    Hockey, Beth Ann; Lemon, Oliver; Campana, Ellen; Hiatt, Laura; Aist, Gregory; Hieronymous, Jim; Gruenstein, Alexander; Dowding, John

    2003-01-01

    We present experimental evidence that providing naive users of a spoken dialogue system with immediate help messages related to their out-of-coverage utterances improves their success in using the system. A grammar-based recognizer and a Statistical Language Model (SLM) recognizer are run simultaneously. If the grammar-based recognizer succeeds, the less accurate SLM recognizer hypothesis is not used. When the grammar-based recognizer fails and the SLM recognizer produces a recognition hypothesis, this result is used by the Targeted Help agent to give the user feedback on what was recognized, a diagnosis of what was problematic about the utterance, and a related in-coverage example. The in-coverage example is intended to encourage alignment between user inputs and the language model of the system. We report on controlled experiments on a spoken dialogue system for command and control of a simulated robotic helicopter.
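The fallback logic described in the abstract above (prefer the grammar-based hypothesis; on failure, use the SLM hypothesis to generate targeted help) can be sketched with hypothetical stand-in recognizers. The vocabulary, message text, and function names below are invented for illustration and are not the authors' system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hypothesis:
    text: str
    in_coverage: bool

# Hypothetical in-coverage command grammar
GRAMMAR_VOCAB = {"fly to the tower", "land at the pad"}

def grammar_recognize(utterance: str) -> Optional[Hypothesis]:
    # The grammar-based recognizer succeeds only on in-coverage commands.
    return Hypothesis(utterance, True) if utterance in GRAMMAR_VOCAB else None

def slm_recognize(utterance: str) -> Hypothesis:
    # The SLM recognizer is less accurate but always yields some hypothesis.
    return Hypothesis(utterance, False)

def targeted_help(hyp: Hypothesis) -> str:
    example = sorted(GRAMMAR_VOCAB)[0]
    return (f"I heard: '{hyp.text}'. That command is out of coverage. "
            f"Try an in-coverage example such as: '{example}'.")

def process(utterance: str) -> str:
    hyp = grammar_recognize(utterance)
    if hyp is not None:
        return f"Executing: {hyp.text}"   # grammar result preferred when it succeeds
    # Otherwise fall back to the SLM hypothesis to drive targeted help
    return targeted_help(slm_recognize(utterance))

print(process("fly to the tower"))
print(process("please go up"))
```

The design point is that the SLM hypothesis, though too unreliable to execute, is still informative enough to tell the user what was heard and to steer them toward in-coverage phrasing.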

  5. Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)

    NASA Astrophysics Data System (ADS)

    Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto

    An automatic speech-to-text transformer system, suited to unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a prior step of isolated-word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for the input sequence. Pronunciation differences among regions of Brazil are considered, but only those that cause differences in phonological transcription, because differences at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all candidate written words are analyzed from an orthographic and grammatical point of view, to eliminate the incorrect ones.

  6. Investigating L2 Spoken English through the Role Play Learner Corpus

    ERIC Educational Resources Information Center

    Nava, Andrea; Pedrazzini, Luciana

    2011-01-01

    We describe an exploratory study carried out within the University of Milan, Department of English the aim of which was to analyse features of the spoken English of first-year Modern Languages undergraduates. We compiled a learner corpus, the "Role Play" corpus, which consisted of 69 role-play interactions in English carried out by…

  7. Pair Counting to Improve Grammar and Spoken Fluency

    ERIC Educational Resources Information Center

    Hanson, Stephanie

    2017-01-01

    English language learners are often more grammatically accurate in writing than in speaking. As students focus on meaning while speaking, their spoken fluency comes at a cost: their grammatical accuracy decreases. The author wanted to find a way to help her students improve their oral grammar; that is, she wanted them to focus on grammar while…

  8. Lexicogrammar in the International Construction Industry: A Corpus-Based Case Study of Japanese-Hong-Kongese On-Site Interactions in English

    ERIC Educational Resources Information Center

    Handford, Michael; Matous, Petr

    2011-01-01

    The purpose of this research is to identify and interpret statistically significant lexicogrammatical items that are used in on-site spoken communication in the international construction industry, initially through comparisons with reference corpora of everyday spoken and business language. Several data sources, including audio and video…

  9. Automated Scoring of L2 Spoken English with Random Forests

    ERIC Educational Resources Information Center

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…

  10. Spoken Persuasive Discourse Abilities of Adolescents with Acquired Brain Injury

    ERIC Educational Resources Information Center

    Moran, Catherine; Kirk, Cecilia; Powell, Emma

    2012-01-01

    Purpose: The aim of this study was to examine the performance of adolescents with acquired brain injury (ABI) during a spoken persuasive discourse task. Persuasive discourse is frequently used in social and academic settings and is of importance in the study of adolescent language. Method: Participants included 8 adolescents with ABI and 8 peers…

  11. Are Phonological Representations of Printed and Spoken Language Isomorphic? Evidence from the Restrictions on Unattested Onsets

    ERIC Educational Resources Information Center

    Berent, Iris

    2008-01-01

    Are the phonological representations of printed and spoken words isomorphic? This question is addressed by investigating the restrictions on onsets. Cross-linguistic research suggests that onsets of rising sonority are preferred to sonority plateaus, which, in turn, are preferred to sonority falls (e.g., bnif, bdif, lbif). Of interest is whether…

  12. A Comparison between Written and Spoken Narratives in Aphasia

    ERIC Educational Resources Information Center

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  13. “Down the Language Rabbit Hole with Alice”: A Case Study of a Deaf Girl with a Cochlear Implant

    PubMed Central

    Andrews, Jean F.; Dionne, Vickie

    2011-01-01

    Alice, a deaf girl who was implanted after three years of age, was exposed to four weeks of storybook sessions conducted in American Sign Language (ASL) and speech (English). Two research questions were addressed: (1) how did she use her sign bimodal/bilingualism, codeswitching, and codemixing during reading activities, and (2) what sign bilingual codeswitching and codemixing strategies did she use while attending to stories delivered under two treatments: ASL only and speech only. Retelling scores were collected to determine the type and frequency of her codeswitching/codemixing strategies between both languages after Alice was read a story in ASL and in spoken English. Qualitative descriptive methods were utilized. Teacher, clinician, and student transcripts of the reading and retelling sessions were recorded. Results showed that Alice frequently used codeswitching and codemixing strategies while retelling the stories under both treatments. Alice's speech production increased in her retellings under both the ASL storyreading and the spoken-English-only reading of the story, and the ASL storyreading did not decrease her retelling scores in spoken English. Professionals are encouraged to consider the benefits of early sign bimodal/bilingualism to enhance the overall speech, language, and reading proficiency of deaf children with cochlear implants. PMID:22135677

  14. Lexical access in sign language: a computational model.

    PubMed

    Caselli, Naomi K; Cohen-Goldberg, Ariel M

    2014-01-01

    Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: how many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
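A minimal, hypothetical sketch of the kind of spreading-activation lexicon the abstract above invokes: lexical nodes receive bottom-up perceptual input, excitation spreads between signs in proportion to shared handshape/location features, and lateral inhibition lets one candidate win. The mini-lexicon, features, and parameters below are invented for illustration; this is not Caselli and Cohen-Goldberg's model:

```python
import numpy as np

# Hypothetical mini-lexicon: signs described by (handshape, location) features.
lexicon = {
    "APPLE": ("fist", "cheek"),
    "ONION": ("fist", "cheek"),   # dense neighbor: shares both features with APPLE
    "CANDY": ("index", "cheek"),  # shares location only
    "GREEN": ("index", "chin"),   # shares nothing with APPLE
}
signs = list(lexicon)
n = len(signs)

# Excitatory connectivity = number of shared sub-lexical features between signs.
overlap = np.array([[len(set(lexicon[a]) & set(lexicon[b])) for b in signs]
                    for a in signs], float)
np.fill_diagonal(overlap, 0)

def recognize(target, steps=30, excite=0.05, inhibit=0.12, input_gain=0.2):
    act = np.zeros(n)
    idx = signs.index(target)
    for _ in range(steps):
        bottom_up = np.zeros(n)
        bottom_up[idx] = input_gain              # perceptual input to the target
        spread = excite * overlap @ act          # excitation via shared features
        suppress = inhibit * (act.sum() - act)   # lateral inhibition from competitors
        act = np.clip(act + bottom_up + spread - suppress, 0, 1)
    return dict(zip(signs, act))

final = recognize("APPLE")
print(final)
```

With these toy parameters, lateral inhibition outweighs the feature-overlap excitation, so the target saturates while even its dense neighbor is suppressed: a crude winner-take-all dynamic of the sort such architectures rely on.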

  15. Combinatorics and synchronization in natural semiotics

    NASA Astrophysics Data System (ADS)

    Orsucci, Franco; Giuliani, Alessandro; Webber, Charles; Zbilut, Joseph; Fonagy, Peter; Mazza, Marianna

    2006-03-01

    In this study, the derivation of an objective metric for assessing the degree of structuring of written and spoken texts is presented. The proposed metric is based on scoring recurrences inside a text by means of recurrence quantification analysis (RQA), a nonlinear technique widely used in other fields of science. The adopted approach allowed us to create a ranking of different poems strictly related to their prosodic structure and, more importantly, to recognize the same structure across different languages, to define a level of structuring typical of spoken texts, and to identify the progressive synchronization of a dyadic relation between two speakers in terms of the relative complexity of their speeches. These results suggest the possibility of introducing objective measurement methods into humanities studies.
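The recurrence scoring named in the abstract above can be illustrated with a toy symbolic version: build a recurrence plot from overlapping n-grams of a character sequence and count the fraction of off-diagonal recurrent points. The strings and the simplified n-gram embedding below are invented for illustration; published RQA tooling works on richer codings of the text:

```python
import numpy as np

def recurrence_rate(sequence, embed=2):
    """Fraction of recurrent (matching) off-diagonal points in a recurrence
    plot built from overlapping n-grams of a symbol sequence."""
    grams = [tuple(sequence[i:i + embed]) for i in range(len(sequence) - embed + 1)]
    n = len(grams)
    plot = np.array([[grams[i] == grams[j] for j in range(n)] for i in range(n)])
    # Exclude the trivial main diagonal (every point recurs with itself)
    return (plot.sum() - n) / (n * n - n)

rr_repetitive = recurrence_rate("abcabcabcabc")  # highly structured, poem-like
rr_aperiodic = recurrence_rate("aqzwsxedcrfv")   # no repeated bigrams
print(rr_repetitive, rr_aperiodic)
```

A repetitive sequence yields many recurrent points while a sequence with no repeated bigrams yields none, which is the intuition behind ranking texts by their degree of structuring.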

  16. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    PubMed

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 learners of American Sign Language (ASL) performed this task in the fMRI scanner. Results indicated that the L2 ASL learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing, and possibly lipreading, during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Real-Time Processing of ASL Signs: Delayed First Language Acquisition Affects Organization of the Mental Lexicon

    ERIC Educational Resources Information Center

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2015-01-01

    Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of…

  18. The Arizona Home Language Survey: The Identification of Students for ELL Services

    ERIC Educational Resources Information Center

    Goldenberg, Claude; Rutherford-Quach, Sara

    2010-01-01

    Assuring that English language learners (ELLs) receive the services to which they have a right requires accurately identifying those students. Virtually all states identify ELLs in a two-step process. First, parents fill out a home language survey. Second, students in whose homes a language other than English is spoken and who therefore might…

  19. Type of Iconicity Matters in the Vocabulary Development of Signing Children

    ERIC Educational Resources Information Center

    Ortega, Gerardo; Sümer, Beyza; Özyürek, Asli

    2017-01-01

    Recent research on signed as well as spoken language shows that the iconic features of the target language might play a role in language development. Here, we ask further whether different types of iconic depictions modulate children's preferences for certain types of sign-referent links during vocabulary development in sign language. Results from…

  20. Influence of Linguistic Environment on Children's Language Development: Flemish versus Dutch Children

    ERIC Educational Resources Information Center

    Wiefferink, C. H.; Spaai, G. W. G.; Uilenburg, N.; Vermeij, B. A. M.; De Raeve, L.

    2008-01-01

    In the present study, language development of Dutch children with a cochlear implant (CI) in a bilingual educational setting and Flemish children with a CI in a dominantly monolingual educational setting is compared. In addition, we compared the development of spoken language with the development of sign language in Dutch children. Eighteen…

  1. How Facebook Can Revitalise Local Languages: Lessons from Bali

    ERIC Educational Resources Information Center

    Stern, Alissa Joy

    2017-01-01

    For a language to survive, it must be spoken and passed down to the next generation. But how can we engage teenagers--so crucial for language transmission--to use and value their local tongue when they are bombarded by pressures from outside and from within their society to only speak national and international languages? This paper analyses the…

  2. Mother Tongue versus Arabic: The Post-Independence Eritrean Language Policy Debate

    ERIC Educational Resources Information Center

    Mohammad, Abdulkader Saleh

    2016-01-01

    This paper analyses the controversial discourses around the significance of the Arabic language in Eritrea. It challenges the arguments of the government and some scholars, who claim that the Arabic language is alien to Eritrean society. They argue that it was introduced as an official language under British rule and is only spoken by the Rashaida…

  3. Sources of Difficulty in the Processing of Written Language. Report Series 4.3.

    ERIC Educational Resources Information Center

    Chafe, Wallace

    Ease of language processing varies with the nature of the language involved. Ordinary spoken language is the easiest kind to produce and understand, while writing is a relatively new development. On thoughtful inspection, the readability of writing has shown itself to be a complex topic requiring insights from many academic disciplines and…

  4. First Steps to Endangered Language Documentation: The Kalasha Language, a Case Study

    ERIC Educational Resources Information Center

    Mela-Athanasopoulou, Elizabeth

    2011-01-01

    The present paper, based on extensive fieldwork conducted on Kalasha, an endangered language spoken in three small valleys in the Chitral District of northwestern Pakistan, presents a spontaneous dialogue-based elicitation of linguistic material used for the description and documentation of the language. After a brief display of the basic typology…

  5. Blackfeet Language Survey.

    ERIC Educational Resources Information Center

    Boehmler, Eileen

    1979-01-01

    A survey is presented of the Blackfeet language that is used in the Browning area of Montana. The purpose of the survey is to determine the extent to which the language is spoken and passed on at home, and the degree of interest in the language among the young people. The results are presented along with comments where appropriate. Generally, it…

  6. A Grammar of Southern Pomo: An Indigenous Language of California

    ERIC Educational Resources Information Center

    Walker, Neil Alexander

    2013-01-01

    Southern Pomo is a moribund indigenous language, one of seven closely related Pomoan languages once spoken in Northern California in the vicinity of the Russian River drainage, Clear Lake, and the adjacent Pacific coast. This work is the first full-length grammar of the language. It is divided into three parts. Part I introduces the sociocultural…

  7. The Resilience of Structure Built around the Predicate: Homesign Gesture Systems in Turkish and American Deaf Children

    ERIC Educational Resources Information Center

    Goldin-Meadow, Susan; Namboodiripad, Savithry; Mylander, Carolyn; Özyürek, Asli; Sancar, Burcu

    2015-01-01

    Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called "homesigns", which have many of the properties of natural language--the so-called resilient properties of language. We explored the resilience of structure built…

  8. Listening to Accented Speech in a Second Language: First Language and Age of Acquisition Effects

    ERIC Educational Resources Information Center

    Larraza, Saioa; Samuel, Arthur G.; Oñederra, Miren Lourdes

    2016-01-01

    Bilingual speakers must acquire the phonemic inventory of 2 languages and need to recognize spoken words cross-linguistically; a demanding job potentially made even more difficult due to dialectal variation, an intrinsic property of speech. The present work examines how bilinguals perceive second language (L2) accented speech and where…

  9. Learning a Minoritized Language in a Majority Language Context: Student Agency and the Creation of Micro-Immersion Contexts

    ERIC Educational Resources Information Center

    DePalma, Renée

    2015-01-01

    This study investigates the self-reported experiences of students participating in a Galician language and culture course. Galician, a language historically spoken in northwestern Spain, has been losing ground with respect to Spanish, particularly in urban areas and among the younger generations. The research specifically focuses on informal…

  10. Prestige from the Bottom Up: A Review of Language Planning in Guernsey

    ERIC Educational Resources Information Center

    Sallabank, Julia

    2005-01-01

    This paper discusses language planning measures in Guernsey, Channel Islands. The indigenous language is spoken fluently by only 2% of the population, and is at level 7 on Fishman's 8-point scale of endangerment. It has no official status and low social prestige, and language planning has little official support or funding. Political autonomy has…

  11. Community Language Learning and Counseling-Learning. TEAL Occasional Papers, Vol. 1, 1977.

    ERIC Educational Resources Information Center

    Soga, Lillian

    Community Language Learning (CLL) is a humanistic approach to learning which emphasizes the learner and learning rather than the teacher and teaching. In some situations where the teacher is not fluent in the various languages spoken by the students, such as in the English as a second language (ESL) classroom, advanced students may serve as…

  12. Finding Relevant Data in a Sea of Languages

    DTIC Science & Technology

    2016-04-26

    full machine-translated text, unbiased word clouds, query-biased word clouds, and query-biased sentence...and information retrieval to automate language processing tasks so that the limited number of linguists available for analyzing text and spoken...the crime (stock market). The Cross-LAnguage Search Engine (CLASE) has already preprocessed the documents, extracting text to identify the language

  13. Play to Learn: Self-Directed Home Language Literacy Acquisition through Online Games

    ERIC Educational Resources Information Center

    Eisenchlas, Susana A.; Schalley, Andrea C.; Moyes, Gordon

    2016-01-01

    Home language literacy education in Australia has been pursued predominantly through Community Language Schools. At present, some 1,000 of these, attended by over 100,000 school-age children, cater for 69 of the over 300 languages spoken in Australia. Despite good intentions, these schools face a number of challenges. For instance, children may…

  14. Multilingual Education in Macao

    ERIC Educational Resources Information Center

    Young, Ming Yee Carissa

    2009-01-01

    This paper focuses on the current use of the three written languages (Chinese, Portuguese and English) and the four spoken languages (Chinese-Cantonese, Chinese-Putonghua, Portuguese and English) in Macao, a former Portuguese colony (1557-1999) which is now a Special Administrative Region of China. Chinese and Portuguese are official languages,…

  15. Social Class and Language Attitudes in Hong Kong

    ERIC Educational Resources Information Center

    Lai, Mee Ling

    2010-01-01

    This article examines the relation between social class and language attitudes through a triangulated study that analyses the attitudes of 836 secondary school students from different socioeconomic backgrounds toward the 3 official spoken languages used in postcolonial Hong Kong (HK; i.e., Cantonese, English, and Putonghua). The respondents were…

  16. Digital Divide: Low German and Other Minority Languages

    ERIC Educational Resources Information Center

    Wiggers, Heiko

    2017-01-01

    This paper investigates the online presence of Low German, a minority language spoken in northern Germany, as well as several other European regional and minority languages. In particular, this article presents the results of two experiments, one involving "Wikipedia" and one involving "Twitter," that assess whether and to…

  17. Investigating Black ASL: A Systematic Review

    ERIC Educational Resources Information Center

    Toliver-Smith, Andrea; Gentry, Betholyn

    2017-01-01

    The authors reviewed the literature regarding linguistic variations seen in American Sign Language. These variations are influenced by region and culture. Features of spoken languages have also influenced sign languages as they intersected, e.g., Black ASL has been influenced by African American English. A literature review was conducted to…

  18. Verbicide.

    ERIC Educational Resources Information Center

    Orr, David W.

    2001-01-01

    A professor of conservation biology, and champion of the spoken and written word, discusses who is corrupting the English language and why, describing the culprits and suggesting how to remedy the situation (e.g., restore the habit of talking directly to one another; use proper language; hold those who are corrupting the language accountable; and…

  19. The Specificity of Sound Symbolic Correspondences in Spoken Language

    ERIC Educational Resources Information Center

    Tzeng, Christina Y.; Nygaard, Lynne C.; Namy, Laura L.

    2017-01-01

    Although language has long been regarded as a primarily arbitrary system, "sound symbolism," or non-arbitrary correspondences between the sound of a word and its meaning, also exists in natural language. Previous research suggests that listeners are sensitive to sound symbolism. However, little is known about the specificity of these…

  20. Perfecting Language: Experimenting with Vocabulary Learning

    ERIC Educational Resources Information Center

    Absalom, Matthew

    2014-01-01

    One of the thorniest aspects of teaching languages is developing students' vocabulary, yet it is impossible to be "an accurate and highly communicative language user with a very small vocabulary" (Milton, 2009, p. 3). Nation (2006) indicates that more vocabulary than previously thought is required to function well both at spoken and…

  1. Corpus-Based Optimization of Language Models Derived from Unification Grammars

    NASA Technical Reports Server (NTRS)

    Rayner, Manny; Hockey, Beth Ann; James, Frankie; Bratt, Harry; Bratt, Elizabeth O.; Gawron, Mark; Goldwater, Sharon; Dowding, John; Bhagat, Amrita

    2000-01-01

We describe a technique which makes it feasible to improve the performance of a language model derived from a manually constructed unification grammar, using low-quality untranscribed speech data and a minimum of human annotation. The method is evaluated on a medium-vocabulary spoken language command and control task.

  2. Pronouns in Akebu

    ERIC Educational Resources Information Center

    Koffi, Yao

    2010-01-01

(Purpose) The purpose of this article is to provide a detailed description of the pronouns in Akebu. Akebu is a language spoken in South-West Togo and in the neighboring towns in Ghana. Akebu belongs to a group of languages formerly called "Togo Remnant Languages," now called Ghana Togo Mountains (GTM) languages. The native Akebu speakers call their…

  3. Competency: The Language of the Behavioral Objectives Movement.

    ERIC Educational Resources Information Center

    Craig, Samuel B., Jr.

    Several external and internal factors combine to hinder optimal communication in "Competency," the language of behavior modification. As a language, Competency a) is spoken with varying degrees of fluency and facility, b) is difficult to translate into English because the common vocabulary is used descriptively in English while it is…

  4. LANGUAGES OF THE WORLD--BOREO-ORIENTAL FASCICLE ONE.

    ERIC Educational Resources Information Center

    VOEGELIN, C. F.; AND OTHERS

    THIS REPORT LISTS AND DESCRIBES THE BOREO-ORIENTAL LANGUAGES WHICH INCLUDE ALL NON-CAUCASIAN, NON-INDO-EUROPEAN, AND NON-SINO-TIBETAN LANGUAGES SPOKEN BETWEEN THE LINE THAT SEPARATES EUROPE FROM ASIA AND THE NORTH PACIFIC OCEAN. (THE REPORT IS PART OF A SERIES, ED 010 350 TO ED 010 367.) (JK)

  5. 7 CFR 253.5 - State agency requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

... households which speak the same non-English language and which do not contain adult(s) fluent in English as a second language. If the non-English language is spoken but not written, the State agency shall... sufficient bilingual staff for the timely processing of non-English speaking applicants. (3) The State agency...

  6. Mutual Intelligibility between Closely Related Languages in Europe

    ERIC Educational Resources Information Center

    Gooskens, Charlotte; van Heuven, Vincent J.; Golubovic, Jelena; Schüppert, Anja; Swarte, Femke; Voigt, Stefanie

    2018-01-01

    By means of a large-scale web-based investigation, we established the degree of mutual intelligibility of 16 closely related spoken languages within the Germanic, Slavic and Romance language families in Europe. We first present the results of a selection of 1833 listeners representing the mutual intelligibility between young, educated Europeans…

  7. American Indian Studies in the Extinct Languages of Southeastern New England

    ERIC Educational Resources Information Center

    O'Brien, Frank Waabu

    2005-01-01

    This monograph contains 13 self-contained brief treatises that comprise material on linguistic, historical and cultural studies of the extinct American Indian languages of southeastern New England. These Indian languages, and their dialects, were once spoken principally in the States of Rhode Island and Massachusetts. They are called…

  8. Vocal Interaction between Children with Down syndrome and their Parents

    PubMed Central

    Thiemann-Bourque, Kathy S.; Warren, Steven F.; Brady, Nancy; Gilkerson, Jill; Richards, Jeffrey A.

    2014-01-01

Purpose The purpose of this study was to describe differences in parent input and child vocal behaviors of children with Down syndrome (DS) compared to typically developing (TD) children. The goals were to describe the language learning environments at distinctly different ages in early childhood. Method Nine children with DS and 9 age-matched TD children participated; four children in each group were ages 9–11 months and five were between 25–54 months. Measures were derived from automated vocal analysis. A digital language processor measured the richness of the child’s language environment, including number of adult words, conversational turns, and child vocalizations. Results Analyses indicated no significant differences in words spoken by parents of younger vs. older children with DS, and significantly more words spoken by parents of TD children than parents of children with DS. Differences between the DS and TD groups were observed in rates of all vocal behaviors, with no differences noted between the younger vs. older children with DS; the younger TD children did not vocalize significantly more than the younger DS children. Conclusions Parents of children with DS continue to provide consistent levels of input across the early language learning years; however, child vocal behaviors remain low after the age of 24 months, suggesting the need for additional and alternative intervention approaches. PMID:24686777
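Automated measures like the adult word counts, child vocalizations, and conversational turns described above can be approximated from a diarized, timestamped transcript. A minimal sketch under stated assumptions (the segment data, the 5-second gap threshold, and the simplified turn definition are all invented for illustration; this is not the proprietary LENA algorithm):

```python
# Sketch: derive adult word count, child vocalization count, and
# adult-child conversational turns from a diarized timeline.
# Hypothetical data and thresholds; not the LENA system's algorithm.

def count_turns(segments, max_gap=5.0):
    """A turn = adjacent adult/child segments (either order) separated
    by at most max_gap seconds of silence."""
    turns = 0
    for prev, cur in zip(segments, segments[1:]):
        different_speakers = {prev["who"], cur["who"]} == {"adult", "child"}
        if different_speakers and cur["start"] - prev["end"] <= max_gap:
            turns += 1
    return turns

timeline = [  # speaker, start/end time (s), word count per segment
    {"who": "adult", "start": 0.0,  "end": 2.0,  "words": 6},
    {"who": "child", "start": 3.0,  "end": 4.0,  "words": 1},
    {"who": "adult", "start": 4.5,  "end": 6.0,  "words": 4},
    {"who": "adult", "start": 30.0, "end": 31.0, "words": 3},
]

adult_words = sum(s["words"] for s in timeline if s["who"] == "adult")
child_vocs = sum(1 for s in timeline if s["who"] == "child")
print(adult_words, child_vocs, count_turns(timeline))  # 13 1 2
```

The last adult segment starts 24 seconds after the previous one ends, so it opens no new turn; only the two quick adult-child alternations count.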

  9. Cognitive Coordination on the Network Centric Battlefield

    DTIC Science & Technology

    2009-03-06

access in spoken language comprehension: Evaluating a linking hypothesis between fixations and linguistic processing. Journal of Psycholinguistic Research, Vol 29, 557-580. Trueswell, J. & Tanenhaus, M. (eds.) (2004). World-situated language use: Psycholinguistic, linguistic, and computational

  10. Influences of indigenous language on spatial frames of reference in Aboriginal English

    NASA Astrophysics Data System (ADS)

    Edmonds-Wathen, Cris

    2014-06-01

The Aboriginal English spoken by Indigenous children in remote communities in the Northern Territory of Australia is influenced by the home languages spoken by the children and their families. This affects the use of spatial terms in mathematics such as 'in front' and 'behind.' Speakers of the endangered Indigenous Australian language Iwaidja use the intrinsic frame of reference in contexts where speakers of Standard Australian English use the relative frame of reference. Children speaking Aboriginal English show patterns of use that parallel the Iwaidja contexts. This paper presents detailed examples of spatial descriptions in Iwaidja and Aboriginal English that demonstrate these parallel patterns of use. The data come from a study that investigated how an understanding of the spatial frame of reference in Iwaidja could assist in teaching mathematics to Indigenous language-speaking students. Implications for teaching mathematics are explored for teachers without previous experience in a remote Indigenous community.

  11. Recognition of voice commands using adaptation of foreign language speech recognizer via selection of phonetic transcriptions

    NASA Astrophysics Data System (ADS)

    Maskeliunas, Rytis; Rudzionis, Vytautas

    2011-06-01

In recent years various commercial speech recognizers have become available. These recognizers make it possible to develop applications incorporating various speech recognition techniques easily and quickly. All of these commercial recognizers are typically targeted at widely spoken languages with large market potential; however, it may be possible to adapt available commercial recognizers for use in environments where less widely spoken languages are used. Since most commercial recognition engines are closed systems, the single avenue for adaptation is to find ways of selecting proper phonetic transcriptions between the two languages. This paper deals with methods for finding phonetic transcriptions that allow Lithuanian voice commands to be recognized using English speech engines. The experimental evaluation showed that it is possible to find phonetic transcriptions that enable the recognition of Lithuanian voice commands with an accuracy of over 90%.
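The general idea of selecting among candidate cross-language transcriptions can be sketched as follows. Everything here is an illustrative stand-in: the grapheme-mapping rules, the command, and the toy scoring function are invented, not the paper's actual mappings; a real system would rank the candidates by the English engine's recognition confidence.

```python
# Sketch: map a Lithuanian command to several candidate transcriptions in
# the target recognizer's phone/grapheme inventory, then keep the candidate
# the recognizer scores best. Mappings and scoring are illustrative only.

CANDIDATE_MAPS = [  # alternative Lithuanian -> English rewrite rules
    {"š": "sh", "č": "ch", "y": "ee"},
    {"š": "s",  "č": "tch", "y": "i"},
]

def candidates(word):
    """Generate one candidate transcription per rule set."""
    out = []
    for rules in CANDIDATE_MAPS:
        t = word
        for lt, en in rules.items():
            t = t.replace(lt, en)
        out.append(t)
    return out

def pick_transcription(word, score):
    """score(candidate) -> recognizer confidence; keep the best candidate."""
    return max(candidates(word), key=score)

# Toy confidence model standing in for an English recognition engine.
toy_score = lambda c: c.count("sh") + c.count("ch")
print(candidates("šok"))                      # ['shok', 'sok']
print(pick_transcription("šok", toy_score))   # shok
```

In practice the scoring step would loop over recorded utterances of each command, so the selected transcription is the one the closed recognition engine actually recognizes most reliably.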

  12. Learning with a missing sense: what can we learn from the interaction of a deaf child with a turtle?

    PubMed

    Miller, Paul

    2009-01-01

    This case study reports on the progress of Navon, a 13-year-old boy with prelingual deafness, over a 3-month period following exposure to Logo, a computer programming language that visualizes specific programming commands by means of a virtual drawing tool called the Turtle. Despite an almost complete lack of skills in spoken and sign language, Navon made impressive progress in his programming skills, including acquisition of a notable active written vocabulary, which he learned to apply in a purposeful, rule-based manner. His achievements are discussed with reference to commonly held assumptions about the relationship between language and thought, in general, and the prerequisite of proper spoken language skills for the acquisition of reading and writing, in particular. Highlighted are the central principles responsible for Navon's unexpected cognitive and linguistic development, including the way it affected his social relations with peers and teachers.

  13. Spoken language achieves robustness and evolvability by exploiting degeneracy and neutrality.

    PubMed

    Winter, Bodo

    2014-10-01

As with biological systems, spoken languages are strikingly robust against perturbations. This paper shows that languages achieve robustness in a way that is highly similar to many biological systems. For example, speech sounds are encoded via multiple acoustically diverse, temporally distributed and functionally redundant cues, characteristics that bear similarities to what biologists call "degeneracy". Speech is furthermore adequately characterized by neutrality, with many different tongue configurations leading to similar acoustic outputs, and different acoustic variants understood as the same by recipients. This highlights the presence of a large neutral network of acoustic neighbors for every speech sound. Such neutrality ensures that a steady backdrop of variation can be maintained without impeding communication, assuring that there is "fodder" for subsequent evolution. Thus, studying linguistic robustness is not only important for understanding how linguistic systems maintain their functioning upon the background of noise, but also for understanding the preconditions for language evolution.

  14. Language Impairments in the Development of Sign: Do They Reside in a Specific Modality or Are They Modality-Independent Deficits?

    ERIC Educational Resources Information Center

    Woll, Bencie; Morgan, Gary

    2012-01-01

    Various theories of developmental language impairments have sought to explain these impairments in modality-specific ways--for example, that the language deficits in SLI or Down syndrome arise from impairments in auditory processing. Studies of signers with language impairments, especially those who are bilingual in a spoken language as well as a…

  15. Korean University EFL Student Perspectives of Smartphone Applications (Apps) as Tools for Language Learning: An Action Research Study

    ERIC Educational Resources Information Center

    Jackson, Bernedette S.

    2017-01-01

    Learning a second or foreign language may be a daunting task for anyone; however, learning a language that is vastly different from a person's native language can be extremely difficult. This is especially true in South Korea where English is taught and spoken as a foreign language. For Korean students, who typically study English from a young…

  16. The Stability and Validity of Automated Vocal Analysis in Preverbal Preschoolers With Autism Spectrum Disorder

    PubMed Central

    Woynaroski, Tiffany; Oller, D. Kimbrough; Keceli-Kaysili, Bahar; Xu, Dongxin; Richards, Jeffrey A.; Gilkerson, Jill; Gray, Sharmistha; Yoder, Paul

    2017-01-01

    Theory and research suggest that vocal development predicts “useful speech” in preschoolers with autism spectrum disorder (ASD), but conventional methods for measurement of vocal development are costly and time consuming. This longitudinal correlational study examines the reliability and validity of several automated indices of vocalization development relative to an index derived from human coded, conventional communication samples in a sample of preverbal preschoolers with ASD. Automated indices of vocal development were derived using software that is presently “in development” and/or only available for research purposes and using commercially available Language ENvironment Analysis (LENA) software. Indices of vocal development that could be derived using the software available for research purposes: (a) were highly stable with a single day-long audio recording, (b) predicted future spoken vocabulary to a degree that was nonsignificantly different from the index derived from conventional communication samples, and (c) continued to predict future spoken vocabulary even after controlling for concurrent vocabulary in our sample. The score derived from standard LENA software was similarly stable, but was not significantly correlated with future spoken vocabulary. Findings suggest that automated vocal analysis is a valid and reliable alternative to time intensive and expensive conventional communication samples for measurement of vocal development of preverbal preschoolers with ASD in research and clinical practice. PMID:27459107

  17. Rates and Predictors of Professional Interpreting Provision for Patients With Limited English Proficiency in the Emergency Department and Inpatient Ward

    PubMed Central

    Ryan, Jennifer; Abbato, Samantha; Greer, Ristan; Vayne-Bossert, Petra; Good, Phillip

    2017-01-01

The provision of professional interpreting services in the hospital setting decreases communication errors of clinical significance and improves clinical outcomes. A retrospective audit was conducted at a tertiary referral adult hospital in Brisbane, Australia. Of 20 563 admissions of patients presenting to the hospital emergency department (ED) and admitted to a ward during 2013-2014, 582 (2.8%) were identified as requiring interpreting services. In all, 19.8% of admissions were provided professional interpreting services in the ED, and 26.1% were provided on the ward. Patients were more likely to receive interpreting services in the ED if they were younger, spoke an Asian language, or used sign language. On the wards, using sign language was associated with three times the odds of being provided an interpreter compared with other languages spoken. Characteristics of patients, including their age and the type of language spoken, influence the clinician’s decision to engage a professional interpreter in both the ED and the inpatient ward. PMID:29144184

  18. Electrophysiological correlates of cross-linguistic semantic integration in hearing signers: N400 and LPC.

    PubMed

    Zachau, Swantje; Korpilahti, Pirjo; Hämäläinen, Jarmo A; Ervast, Leena; Heinänen, Kaisu; Suominen, Kalervo; Lehtihalmes, Matti; Leppänen, Paavo H T

    2014-07-01

We explored semantic integration mechanisms in native and non-native hearing users of sign language and in non-signing controls. Event-related brain potentials (ERPs) were recorded while participants performed a semantic decision task on priming lexeme pairs. Pairs were presented either within speech or across speech and sign language. Target-related ERP responses were subjected to principal component analyses (PCA), and the neurocognitive basis of semantic integration processes was assessed by analyzing the N400 and the late positive complex (LPC) components in response to spoken (auditory) and signed (visual) antonymic and unrelated targets. Semantic relatedness effects triggered across modalities would indicate a tight interconnection between the signers' two languages, similar to that described for spoken language bilinguals. Remarkable structural similarity of the N400 and LPC components, with varying group differences between the spoken and signed targets, was found. The LPC was the dominant response. The controls' LPC differed from the LPC of the two signing groups: it was reduced to the auditory unrelated targets and was less frontal for all the visual targets. The visual LPC was more broadly distributed in native than non-native signers and was left-lateralized for the unrelated targets in the native hearing signers only. Semantic priming effects were found for the auditory N400 in all groups, but only native hearing signers revealed a clear N400 effect to the visual targets. Surprisingly, the non-native signers revealed no semantically related processing effect to the visual targets reflected in the N400 or the LPC; instead they appeared to rely more on visual post-lexical analyzing stages than native signers. We conclude that native and non-native signers employed different processing strategies to integrate signed and spoken semantic content. It appears that the signers' semantic processing system was affected by group-specific factors such as language background and/or usage.

  19. A Spoken Language Intervention for School-Aged Boys with fragile X Syndrome

    PubMed Central

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2015-01-01

    Using a single case design, a parent-mediated spoken language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared story-telling using wordless picture books and targeted three empirically-derived language support strategies. All sessions were implemented via distance video-teleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data was collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed. PMID:27119214

  20. Reader for Advanced Spoken Tamil, Parts 1 and 2.

    ERIC Educational Resources Information Center

    Schiffman, Harold F.

    Part 1 of this reader consists of transcriptions of five Tamil radio plays, with exercises, notes, and discussion. Part 2 is a synopsis grammar and a glossary. Both are intended for advanced students of Tamil who have had at least two years of instruction in the spoken language at the college level. The materials have been tested in classroom use…

  1. Error Awareness and Recovery in Conversational Spoken Language Interfaces

    DTIC Science & Technology

    2007-05-01

portant step towards constructing autonomously self-improving systems. Furthermore, we developed a scalable, data-driven approach that allows a system...prob- lems in spoken dialog (as well as other interactive systems) and constitutes an important step towards building autonomously self-improving...implicitly-supervised learning approach is applicable to other problems, and represents an important step towards developing autonomous, self

  2. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2012-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an event-related potential (ERP) norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a…

  3. Basic Course in Uzbek. Indiana University Publications: Uralic and Altaic Series, Volume 59.

    ERIC Educational Resources Information Center

    Raun, Alo

    This work is a revised edition of the author's "Spoken Uzbek," originally written for class use at Indiana University in 1952-53. Comprised of 25 lesson units and five review units, the format follows the general outline of the earlier ACLS (American Council of Learned Societies) "Spoken Language" courses: basic sentences are presented in build-up…

  4. Computerized Archive and Dictionary of the Jaqimara Languages of South America.

    ERIC Educational Resources Information Center

    Hardman-de-Bautista, M. J.

    The three extant members of the Jaqi (Jaqimara) family, Aymara, Jaqaru and Kawki, are spoken by over one million people primarily in Peru and Bolivia, but earlier members of the Jaqimara family were probably spoken throughout the whole area of present-day Peru. This paper gives an outline of some of the salient structural features of these…

  5. Measuring social desirability across language and sex: A comparison of Marlowe-Crowne Social Desirability Scale factor structures in English and Mandarin Chinese in Malaysia.

    PubMed

    Kurz, A Solomon; Drescher, Christopher F; Chin, Eu Gene; Johnson, Laura R

    2016-06-01

Malaysia is a Southeast Asian country in which multiple languages are prominently spoken, including English and Mandarin Chinese. As psychological science continues to develop within Malaysia, there is a need for psychometrically sound instruments that measure psychological phenomena in multiple languages. For example, assessment tools for measuring social desirability could be a useful addition in psychological assessments and research studies in a Malaysian context. This study examined the psychometric performance of the English and Mandarin Chinese versions of the Marlowe-Crowne Social Desirability Scale when used in Malaysia. Two hundred and eighty-three students (64% female; 83% Chinese, 9% Indian) from two college campuses completed the Marlowe-Crowne Social Desirability Scale in their language of choice (i.e., English or Mandarin Chinese). Proposed factor structures were compared with confirmatory factor analysis, and multiple indicators-multiple causes models were used to examine measurement invariance across language and sex. Factor analyses supported a two-factor structure (i.e., Attribution and Denial) for the measure. Invariance tests revealed the scale was invariant by sex, indicating that social desirability can be interpreted similarly across sex. The scale was partially invariant by language version, with some non-invariance observed within the Denial factor. Non-invariance may be related to differences between the English and Mandarin Chinese languages, as well as cultural differences. Directions for further research include examining the measurement of social desirability in other contexts where both English and Mandarin Chinese are spoken (e.g., China) and further examining the causes of non-invariance on specific items.

  6. White Matter Microstructure Correlates of Narrative Production in Typically Developing Children and Children with High Functioning Autism

    PubMed Central

    Mills, Brian; Lai, Janie; Brown, Timothy T.; Erhart, Matthew; Halgren, Eric; Reilly, Judy; Dale, Anders; Appelbaum, Mark; Moses, Pamela

    2013-01-01

This study investigated the relationship between white matter microstructure and the development of morphosyntax in a spoken narrative in typically developing children (TD) and in children with high functioning autism (HFA). Autism is characterized by language and communication impairments, yet the relationship between morphosyntactic development in spontaneous discourse contexts and neural development is not well understood in either this population or typical development. Diffusion tensor imaging (DTI) was used to assess multiple parameters of diffusivity as indicators of white matter tract integrity in language-related tracts in children between 6 and 13 years of age. Children were asked to spontaneously tell a story about a time when someone made them sad, mad, or angry. The story was evaluated for morphological accuracy and syntactic complexity. Analysis of the relationship between white matter microstructure and language performance in TD children showed that diffusivity correlated with morphosyntax production in the superior longitudinal fasciculus (SLF), a fiber tract traditionally associated with language. At the anatomical level, the HFA group showed abnormal diffusivity in the right inferior longitudinal fasciculus (ILF) relative to the TD group. Within the HFA group, children with greater white matter integrity in the right ILF displayed greater morphological accuracy during their spoken narrative. Overall, the current study shows an association between white matter structure in a traditional language pathway and narrative performance in TD children. In the autism group, associations were only found in the ILF, suggesting that during real world language use, children with HFA rely less on typical pathways and instead rely on alternative ventral pathways that possibly mediate visual elements of language. PMID:23810972

  7. Gesture, sign and language: The coming of age of sign language and gesture studies

    PubMed Central

    Goldin-Meadow, Susan; Brentari, Diane

    2016-01-01

    How does sign language compare to gesture, on the one hand, and to spoken language on the other? At one time, sign was viewed as nothing more than a system of pictorial gestures with no linguistic structure. More recently, researchers have argued that sign is no different from spoken language with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the last 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We come to the conclusion that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because, at the moment, it is difficult to tell where sign stops and where gesture begins, we suggest that sign should not be compared to speech alone, but should be compared to speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that making a distinction between sign (or speech) and gesture is essential to predict certain types of learning, and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture. PMID:26434499

  8. Language Outcomes in Deaf or Hard of Hearing Teenagers Who Are Spoken Language Users: Effects of Universal Newborn Hearing Screening and Early Confirmation.

    PubMed

    Pimperton, Hannah; Kreppner, Jana; Mahon, Merle; Stevenson, Jim; Terlektsi, Emmanouela; Worsfold, Sarah; Yuen, Ho Ming; Kennedy, Colin R

This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort. The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of their receptive and expressive language ability at 13-19 years. Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were of small size and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for those deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant in the case of receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78). 
Exposure to UNHS did not account for significant unique variance in any of the three language scores at 13-19 years beyond that accounted for by existing language scores at 6-10 years. Early confirmation accounted for significant unique variance in the expressive language information score at 13-19 years after adjusting for the corresponding score at 6-10 years (R² change = 0.08, p = 0.03). This study found that while adolescent language scores were higher for deaf or hard of hearing teenagers exposed to UNHS and those who had their hearing loss confirmed by 9 months, these group differences were not significant within the whole sample. There was some evidence of a beneficial effect of early confirmation of hearing loss on relative expressive language gain from childhood to adolescence. Further examination of the effect of these variables on adolescent language outcomes in other cohorts would be valuable.

  9. How to Speak "Counselnese".

    ERIC Educational Resources Information Center

    Gladding, Samuel T.

    1983-01-01

    Examines the origin of "Counselnese", the limited professional language spoken by counselors. The 14 words in the language are defined and rules are outlined for improving communication with both counselors and noncounselors as well as those who don't speak acronyms. (Author/JAC)

  10. Deep bottleneck features for spoken language identification.

    PubMed

    Jiang, Bing; Song, Yan; Wei, Si; Liu, Jun-Hua; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short-duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to variations caused by different speakers, the specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on the NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances, respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system is proposed.
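Score-level fusion of subsystems like the phonotactic and acoustic approaches above is, in its simplest form, a weighted combination of per-language scores followed by an argmax. A minimal sketch with made-up scores and an assumed fixed weight (real LID systems calibrate and learn fusion weights on held-out data):

```python
# Sketch: fuse per-language log-likelihood scores from two LID subsystems
# with a fixed weight, then pick the highest-scoring language.
# Scores and the weight are invented for illustration.

def fuse(phonotactic, acoustic, w=0.4):
    """Weighted linear fusion; returns (best language, fused scores)."""
    fused = {lang: w * phonotactic[lang] + (1 - w) * acoustic[lang]
             for lang in phonotactic}
    best = max(fused, key=fused.get)
    return best, fused

phonotactic = {"en": -1.2, "lt": -0.9, "ru": -2.0}
acoustic    = {"en": -0.7, "lt": -1.1, "ru": -1.8}
best, fused = fuse(phonotactic, acoustic)
print(best)  # en
```

Here the acoustic subsystem's preference for English outweighs the phonotactic subsystem's preference for Lithuanian because the acoustic scores carry weight 0.6.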

  11. The Messiness of Language Socialization in Reading Groups: Participation in and Resistance to the Values of Essayist Literacy

    ERIC Educational Resources Information Center

    Poole, Deborah

    2008-01-01

    This paper focuses on the process of literacy socialization in several 5th grade reading groups. Through close analysis of spoken interaction, which centers on a heavily illustrated, non-fiction text, the paper proposes that these reading groups can be seen as complex sites of socialization to the values associated with essayist literacy (i.e.,…

  12. Is Poor Frequency Modulation Detection Linked to Literacy Problems? A Comparison of Specific Reading Disability and Mild to Moderate Sensorineural Hearing Loss

    ERIC Educational Resources Information Center

    Halliday, L. F.; Bishop, D. V. M.

    2006-01-01

    Specific reading disability (SRD) is now widely recognised as often being caused by phonological processing problems, affecting analysis of spoken as well as written language. According to one theoretical account, these phonological problems are due to low-level problems in auditory perception of dynamic acoustic cues. Evidence for this has come…

  13. A decline in prosocial language helps explain public disapproval of the US Congress

    PubMed Central

    Frimer, Jeremy A.; Aquino, Karl; Gebauer, Jochen E.; Zhu, Luke (Lei); Oakes, Harrison

    2015-01-01

    Talking about helping others makes a person seem warm and leads to social approval. This work examines the real world consequences of this basic, social-cognitive phenomenon by examining whether record-low levels of public approval of the US Congress may, in part, be a product of declining use of prosocial language during Congressional debates. A text analysis of all 124 million words spoken in the House of Representatives between 1996 and 2014 found that declining levels of prosocial language strongly predicted public disapproval of Congress 6 mo later. Warm, prosocial language still predicted public approval when removing the effects of societal and global factors (e.g., the September 11 attacks) and Congressional efficacy (e.g., passing bills), suggesting that prosocial language has an independent, direct effect on social approval. PMID:25964358
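    The core measurement behind such a finding is a dictionary-based word count: the rate of prosocial words per unit of text. A minimal sketch, using a hypothetical six-word mini-lexicon in place of the study's much larger prosocial dictionary:

    ```python
    import re

    # Hypothetical mini-lexicon; the actual study used a far larger dictionary.
    PROSOCIAL = {"help", "care", "support", "share", "cooperate", "give"}

    def prosocial_rate(text):
        """Return prosocial words per 1,000 tokens in `text`."""
        tokens = re.findall(r"[a-z']+", text.lower())
        if not tokens:
            return 0.0
        hits = sum(1 for t in tokens if t in PROSOCIAL)
        return 1000.0 * hits / len(tokens)

    speech = "We must help families and support workers who care for others."
    print(round(prosocial_rate(speech), 1))  # 272.7 (3 hits in 11 tokens)
    ```

    Applied to transcripts binned by time, a series of such rates can then be correlated with lagged approval polling, which is the shape of the analysis the abstract describes.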

  14. The Future of Inuktitut in the Face of Majority Languages: Bilingualism or Language Shift?

    ERIC Educational Resources Information Center

    Allen, Shanley

    2007-01-01

    Inuktitut, the Eskimo language spoken in Eastern Canada, is one of the few Canadian indigenous languages with a strong chance of long-term survival because over 90% of Inuit children still learn Inuktitut from birth. In this paper I review existing literature on bilingual Inuit children to explore the prospects for the survival of Inuktitut given…

  15. African Language Resource Handbook: A Resource Handbook of the Eighty-two Highest Priority African Languages. Prepublication Edition.

    ERIC Educational Resources Information Center

    Dwyer, David J.; Yankee, Everyl

    A directory of the 82 African languages given high priority for instruction in the United States contains a profile for each language that includes its classification and where it is spoken, the number of speakers, dialect situation, usage, orthography status, and listings of related human and institutional resources for the purpose of…

  16. Ideologies, Struggles and Contradictions: An Account of Mothers Raising Their Children Bilingually in Luxembourgish and English in Great Britain

    ERIC Educational Resources Information Center

    Kirsch, Claudine

    2012-01-01

    Researchers have studied family language planning within bilingual family contexts but there is a dearth of studies that examine language planning of multilingual parents who raise their children in one of the world's lesser spoken languages. In this study I explore the ideologies and language planning of Luxembourgish mothers who are raising…

  17. VILLAGE--Virtual Immersive Language Learning and Gaming Environment: Immersion and Presence

    ERIC Educational Resources Information Center

    Wang, Yi Fei; Petrina, Stephen; Feng, Francis

    2017-01-01

    3D virtual worlds are promising for immersive learning in English as a Foreign Language (EFL). Unlike English as a Second Language (ESL), EFL typically takes place in the learners' home countries, and the potential of the language is limited by geography. Although learning contexts where English is spoken is important, in most EFL courses at the…

  18. The Ecology of Language in Classrooms at a University in Eastern Ukraine

    ERIC Educational Resources Information Center

    Tarnopolsky, Oleg B.; Goodman, Bridget A.

    2014-01-01

    Using an ecology of language framework, the purpose of this study was to examine the degree to which English as a medium of instruction (EMI) at a private university in eastern Ukraine allows for the use of Ukrainian, the state language, or Russian, the predominantly spoken language in large cities in eastern Ukraine. Uses of English and Russian…

  19. Translingualism and Second Language Acquisition: Language Ideologies of Gaelic Medium Education Teachers in a Linguistically Liminal Setting

    ERIC Educational Resources Information Center

    Knipe, John

    2017-01-01

    Scottish Gaelic, among the nearly 7,000 languages spoken in the world today, is endangered. In the 1980s the Gaelic Medium Education (GME) movement emerged with an emphasis on teaching students all subjects via this ancient tongue with the hope of revitalizing the language. Concomitantly, many linguists have called for problematizing traditional…

  20. Understanding Individual Differences in Language Development across the School Years. Language and Speech Disorders

    ERIC Educational Resources Information Center

    Tomblin, J. Bruce, Ed.; Nippold, Marilyn A., Ed.

    2014-01-01

    This volume presents the findings of a large-scale study of individual differences in spoken (and heard) language development during the school years. The goal of the study was to investigate the degree to which language abilities at school entry were stable over time and influential in the child's overall success in important aspects of…

  1. Oral Communication in the Framework of Cognitive Fluency: Developing and Testing Spoken Russian within the TORFL System

    ERIC Educational Resources Information Center

    Sobolev, Olga; Nesterova, Tatiana

    2014-01-01

    Language testing and second language acquisition research are both concerned with proficiency in the second language; given this shared interest, the rapprochement between these two domains may prove revealing and productive not only in terms of teaching practices, but also in taking a wide view of language, ranging across cognition, society and…

  2. The Arizona Home Language Survey: The Under-Identification of Students for English Language Services

    ERIC Educational Resources Information Center

    Goldenberg, Claude; Rutherford-Quach, Sara

    2012-01-01

    Assuring that English learners (ELs) receive the support services to which they are entitled requires accurately identifying students who are limited in their English proficiency. As a first step in the identification process, students' parents fill out a home language survey. If the survey indicates a language other than English is spoken in the…

  3. The Influence of Teacher Power on English Language Learners' Self-Perceptions of Learner Empowerment

    ERIC Educational Resources Information Center

    Diaz, Abel; Cochran, Kathryn; Karlin, Nancy

    2016-01-01

    English language learners (ELL) are students with a primary language spoken other than English enrolled in U.S. educational settings. As ELL students take on the challenges of learning English and U.S. culture, they must also learn academic content. The expectation to succeed academically in a foreign culture and language, while learning to speak…

  4. Out of the Communist Frying Pan and into the EU Fire? Exploring the Case of Kashubian

    ERIC Educational Resources Information Center

    Nestor, Niamh; Hickey, Tina

    2009-01-01

    A language currently at the nexus of change is Kashubian (in Polish: "kaszubski"), a West Slavic language spoken in northern Poland in the province of Pomerania. Termed a "regional language" by the Polish government in preparation for the ratification of the European Charter for Regional or Minority Languages (signed in 2003…

  5. Examining the Role of Time and Language Type in Reading Development for English Language Learners

    ERIC Educational Resources Information Center

    Betts, Joseph; Bolt, Sara; Decker, Dawn; Muyskens, Paul; Marston, Doug

    2009-01-01

    The purpose of this study was to examine the development of English reading achievement among English Language Learners (ELLs) and to determine whether the time that an ELL's family was in the United States and the type of native language spoken affected their reading development. Participants were 300 third-grade ELLs from two different native…

  6. The Unchanging American Capacity in Languages Other than English: Speaking and Learning Languages Other than English, 2000-2008

    ERIC Educational Resources Information Center

    Rivers, William P.; Robinson, John P.

    2012-01-01

    We present results of 2006 and 2008 replications of the 2000 General Social Survey (GSS), which included nine questions on languages other than English (LOEs) spoken (Robinson, Rivers, & Brecht, 2006). In 2000, 26% claimed they could speak another language, with 10% saying they could speak it "very well." In 2000, foreign language…

  7. The Neural Correlates of Highly Iconic Structures and Topographic Discourse in French Sign Language as Observed in Six Hearing Native Signers

    ERIC Educational Resources Information Center

    Courtin, C.; Herve, P. -Y.; Petit, L.; Zago, L.; Vigneau, M.; Beaucousin, V.; Jobard, G.; Mazoyer, B.; Mellet, E.; Tzourio-Mazoyer, N.

    2010-01-01

    "Highly iconic" structures in Sign Language enable a narrator to act, switch characters, describe objects, or report actions in four-dimensions. This group of linguistic structures has no real spoken-language equivalent. Topographical descriptions are also achieved in a sign-language specific manner via the use of signing-space and…

  8. Analyzing the Types of Discrimination in Turkish for Foreigners Books

    ERIC Educational Resources Information Center

    Agcihan, Ezgi; Gokce, Asiye Toker

    2018-01-01

    Textbooks remain one of the main resources for teachers even today, despite many changes in educational media and technologies. In particular, language course books used in foreign language teaching can be the basic source of learning when the target language is not spoken in the country where it is taught. Thus, it can be…

  9. Written Language Impairments in Primary Progressive Aphasia: A Reflection of Damage to Central Semantic and Phonological Processes

    ERIC Educational Resources Information Center

    Henry, Maya L.; Beeson, Pelagie M.; Alexander, Gene E.; Rapcsak, Steven Z.

    2012-01-01

    Connectionist theories of language propose that written language deficits arise as a result of damage to semantic and phonological systems that also support spoken language production and comprehension, a view referred to as the "primary systems" hypothesis. The objective of the current study was to evaluate the primary systems account in a mixed…

  10. Writing Khoisan: Harmonized Orthographies for Development of Under-Researched and Marginalized Languages--The Case of Cua, Kua, and Tsua Dialect Continuum of Botswana

    ERIC Educational Resources Information Center

    Chebanne, Andy

    2016-01-01

    Khoisan languages are spoken by various culturally diverse communities of Southern Africa. These languages also represent important linguistic diversity. Some Khoisan language communities are under-researched, marginalized, and experiencing sustained sociolinguistic forces that threaten them. For those that have been documented,…

  11. Activating gender stereotypes during online spoken language processing: evidence from Visual World Eye Tracking.

    PubMed

    Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G

    2010-01-01

    This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.

  12. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    PubMed

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full attention condition. Attention manipulation could reduce priming magnitude in both experiments in L2. Moreover, L2 word retrieval increases the reaction times and reduces accuracy on the simultaneous secondary task to protect its own accuracy and speed.

  13. Social inclusion for children with hearing loss in listening and spoken Language early intervention: an exploratory study.

    PubMed

    Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony

    2017-03-14

    Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social roles'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; a 37% response rate). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.

  14. The Sociolinguistics of Sign Languages.

    ERIC Educational Resources Information Center

    Lucas, Ceil, Ed.

    This collection of papers examines how sign languages are distributed around the world; what occurs when they come in contact with spoken and written languages, and how signers use them in a variety of situations. Each chapter introduces the key issues in a particular area of inquiry and provides a comprehensive review of the literature. The seven…

  15. Language Immersion in the Self-Study Mode E-Course

    ERIC Educational Resources Information Center

    Sobolev, Olga

    2016-01-01

    This paper assesses the efficiency of the "Language Immersion e-Course" developed at the London School of Economics and Political Science (LSE) Language Centre. The new self-study revision e-course, promoting students' proficiency in spoken and aural Russian through autonomous learning, is based on the Michel Thomas method, and is…

  16. Coaching Parents to Use Naturalistic Language and Communication Strategies

    ERIC Educational Resources Information Center

    Akamoglu, Yusuf; Dinnebeil, Laurie

    2017-01-01

    Naturalistic language and communication strategies (i.e., naturalistic teaching strategies) refer to practices that are used to promote the child's language and communication skills either through verbal (e.g., spoken words) or nonverbal (e.g., gestures, signs) interactions between an adult (e.g., parent, teacher) and a child. Use of naturalistic…

  17. Codeswitching Techniques: Evidence-Based Instructional Practices for the ASL/English Bilingual Classroom

    ERIC Educational Resources Information Center

    Andrews, Jean F.; Rusher, Melissa

    2010-01-01

    The authors present a perspective on emerging bilingual deaf students who are exposed to, learning, and developing two languages--American Sign Language (ASL) and English (spoken English, manually coded English, and English reading and writing). The authors suggest that though deaf children may lack proficiency or fluency in either language during…

  18. Learning English Language by Radio in Primary Schools in Kenya

    ERIC Educational Resources Information Center

    Odera, Florence Y.

    2011-01-01

    Radio is one of the most affordable educational technologies available for use in education and development in developing countries. This article explores the use of school radio broadcasts to help teachers and pupils learn and improve English, both written and spoken, in Kenyan primary schools. English language occupies a central…

  19. With or without Semantic Mediation: Retrieval of Lexical Representations in Sign Production

    ERIC Educational Resources Information Center

    Navarrete, Eduardo; Caccaro, Arianna; Pavani, Francesco; Mahon, Bradford Z.; Peressotti, Francesca

    2015-01-01

    How are lexical representations retrieved during sign production? Similar to spoken languages, lexical representation in sign language must be accessed through semantics when naming pictures. However, it remains an open issue whether lexical representations in sign language can be accessed via routes that bypass semantics when retrieval is…

  20. Teaching Children To Read in the Second Language. Monographs on Bilingualism No. 1.

    ERIC Educational Resources Information Center

    Smith, Craig

    The guide offers practical ideas to bilingual parents wishing to teach and encourage English-language reading while their children are attending Japanese-medium primary schools in Japan. Parents are encouraged to analyze their home language environment, including both spoken and written English use. The author provides anecdotal accounts of his…
