Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study
ERIC Educational Resources Information Center
Moeller, Aleidine J.; Theiler, Janine
2014-01-01
Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…
Language and literacy development of deaf and hard-of-hearing children: successes and challenges.
Lederberg, Amy R; Schick, Brenda; Spencer, Patricia E
2013-01-01
Childhood hearing loss presents challenges to language development, especially spoken language. In this article, we review existing literature on deaf and hard-of-hearing (DHH) children's patterns and trajectories of language as well as development of theory of mind and literacy. Individual trajectories vary significantly, reflecting access to early identification/intervention, advanced technologies (e.g., cochlear implants), and perceptually accessible language models. DHH children develop sign language in the same way that hearing children develop spoken language, provided they are in a language-rich environment. This occurs naturally for DHH children of deaf parents, who constitute 5% of the deaf population. For DHH children of hearing parents, sign language development depends on the age at which they are exposed to a perceptually accessible first language as well as the richness of input. Most DHH children are born to hearing families who have spoken language as a goal, and such development is now feasible for many children. Some DHH children develop spoken language in bilingual (sign-spoken language) contexts. For the majority of DHH children, spoken language development occurs in either auditory-only contexts or with sign supports. Although developmental trajectories of DHH children with hearing parents have improved with early identification and appropriate interventions, the majority of children are still delayed compared with hearing children. These DHH children show particular weaknesses in the development of grammar. Language deficits and differences have cascading effects in language-related areas of development, such as theory of mind and literacy development.
Automatic translation among spoken languages
NASA Technical Reports Server (NTRS)
Walter, Sharon M.; Costigan, Kelly
1994-01-01
The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.
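The three-stage pipeline the MAVT abstract describes (speech recognition, then machine translation, then speech generation) can be sketched roughly as follows. This is a toy illustration, not MAVT code: every function name and the four-word lexicon are invented, and a real system operates on audio waveforms rather than the text stand-ins used here.

```python
# Toy sketch of a speech-to-speech translation pipeline (hypothetical names).
# The lexicon is a minimal stand-in; real MT models word order and syntax.
EN_ES = {"where": "dónde", "is": "está", "the": "el", "weapon": "arma"}

def recognize_speech(audio: bytes) -> str:
    """Stand-in for the ASR stage: here 'audio' is just UTF-8 text."""
    return audio.decode("utf-8")

def translate(text: str, lexicon: dict) -> str:
    """Toy word-by-word lookup; unknown words pass through unchanged."""
    return " ".join(lexicon.get(word, word) for word in text.lower().split())

def synthesize(text: str) -> bytes:
    """Stand-in for the speech-generation stage (a real TTS emits audio)."""
    return text.encode("utf-8")

def interpret_question(audio_in: bytes) -> bytes:
    """English question in, Spanish question out, via the three stages."""
    recognized = recognize_speech(audio_in)
    translated = translate(recognized, EN_ES)
    return synthesize(translated)

print(interpret_question(b"Where is the weapon").decode("utf-8"))
# -> dónde está el arma
```

The same chain runs in reverse for the informant's Spanish reply; the point of the sketch is only that each stage consumes the previous stage's output.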
Mani, Nivedita; Huettig, Falk
2014-10-01
Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing, potentially attributable to participants' literacy skills. Against this background, the current study examined the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to these effects. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, as in previous studies, successfully anticipated upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading, meta-phonological awareness, or spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. Copyright © 2014 Elsevier Inc. All rights reserved.
Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane
2012-11-01
There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.
ERIC Educational Resources Information Center
Rama, Pia; Sirri, Louah; Serres, Josette
2013-01-01
Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related…
ERIC Educational Resources Information Center
Casey, Laura Baylot; Bicard, David F.
2009-01-01
Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…
The Listening and Spoken Language Data Repository: Design and Project Overview
ERIC Educational Resources Information Center
Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.
2018-01-01
Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…
Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.
de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo
2016-10-01
Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a battery of lexical, grammatical, auditory, and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bimodal Bilingual Language Development of Hearing Children of Deaf Parents
ERIC Educational Resources Information Center
Hofmann, Kristin; Chilla, Solveig
2015-01-01
Adopting a bimodal bilingual language acquisition model, this qualitative case study is the first in Germany to investigate the spoken and sign language development of hearing children of deaf adults (codas). The spoken language competence of six codas within the age range of 3;10 to 6;4 is assessed by a series of standardised tests (SETK 3-5,…
The Development of Spoken Language in Deaf Children: Explaining the Unexplained Variance.
ERIC Educational Resources Information Center
Musselman, Carol; Kircaali-Iftar, Gonul
1996-01-01
This study compared 20 young deaf children with either exceptionally good or exceptionally poor spoken language for their hearing loss, age, and intelligence. Factors associated with high performance included earlier use of binaural ear-level aids, better educated mothers, auditory/verbal or auditory/oral instruction, reliance on spoken language…
"Visual" Cortex Responds to Spoken Language in Blind Children.
Bedny, Marina; Richardson, Hilary; Saxe, Rebecca
2015-08-19
Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. 
Copyright © 2015 the authors.
ERIC Educational Resources Information Center
Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony
2013-01-01
Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…
Spoken Language Production in Young Adults: Examining Syntactic Complexity
ERIC Educational Resources Information Center
Nippold, Marilyn A.; Frantz-Kaspar, Megan W.; Vigeland, Laura M.
2017-01-01
Purpose: In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language…
Hall, Wyatte C
2017-05-01
A long-standing belief is that sign language interferes with spoken language development in deaf children, despite a chronic lack of evidence supporting this belief. This deserves discussion as poor life outcomes continue to be seen in the deaf population. This commentary synthesizes research outcomes with signing and non-signing children and highlights fully accessible language as a protective factor for healthy development. Brain changes associated with language deprivation may be misrepresented as sign language interfering with spoken language outcomes of cochlear implants. This may lead professionals and organizations to advocate against sign language exposure before implantation and to spread misinformation. The existence of a time-sensitive language acquisition window means there is a strong possibility of permanent brain changes when spoken language is not fully accessible to the deaf child and sign language exposure is delayed, as is often standard practice. There is no empirical evidence for the harm of sign language exposure, but there is some evidence for its benefits, and there is growing evidence that lack of language access has negative implications. This includes cognitive delays, mental health difficulties, lower quality of life, higher trauma, and limited health literacy. Claims of cochlear implant- and spoken language-only approaches being more effective than sign language-inclusive approaches are not empirically supported. Cochlear implants are an unreliable standalone first-language intervention for deaf children. Priorities of deaf child development should focus on healthy growth of all developmental domains through a fully accessible first-language foundation such as sign language, rather than auditory deprivation and speech skills.
Spoken language development in children following cochlear implantation.
Niparko, John K; Tobey, Emily A; Thal, Donna J; Eisenberg, Laurie S; Wang, Nae-Yuh; Quittner, Alexandra L; Fink, Nancy E
2010-04-21
Cochlear implantation is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe to profound sensorineural hearing loss (SNHL). To prospectively assess spoken language acquisition following cochlear implantation in young children. Prospective, longitudinal, and multidimensional assessment of spoken language development over a 3-year period in children who underwent cochlear implantation before 5 years of age (n = 188) from 6 US centers and hearing children of similar ages (n = 97) from 2 preschools recruited between November 2002 and December 2004. Follow-up completed between November 2005 and May 2008. Performance on measures of spoken language comprehension and expression (Reynell Developmental Language Scales). Children undergoing cochlear implantation showed greater improvement in spoken language performance (10.4; 95% confidence interval [CI], 9.6-11.2 points per year in comprehension; 8.4; 95% CI, 7.8-9.0 in expression) than would be predicted by their preimplantation baseline scores (5.4; 95% CI, 4.1-6.7, comprehension; 5.8; 95% CI, 4.6-7.0, expression), although mean scores were not restored to age-appropriate levels after 3 years. Younger age at cochlear implantation was associated with significantly steeper rate increases in comprehension (1.1; 95% CI, 0.5-1.7 points per year younger) and expression (1.0; 95% CI, 0.6-1.5 points per year younger). Similarly, each 1-year shorter history of hearing deficit was associated with steeper rate increases in comprehension (0.8; 95% CI, 0.2-1.2 points per year shorter) and expression (0.6; 95% CI, 0.2-1.0 points per year shorter). In multivariable analyses, greater residual hearing prior to cochlear implantation, higher ratings of parent-child interactions, and higher socioeconomic status were associated with greater rates of improvement in comprehension and expression. 
The use of cochlear implants in young children was associated with better spoken language learning than would be predicted from their preimplantation scores.
The Language Development of a Deaf Child with a Cochlear Implant
ERIC Educational Resources Information Center
Mouvet, Kimberley; Matthijs, Liesbeth; Loots, Gerrit; Taverniers, Miriam; Van Herreweghe, Mieke
2013-01-01
Hearing parents of deaf or partially deaf infants are confronted with the complex question of communication with their child. This question is complicated further by conflicting advice on how to address the child: in spoken language only, in spoken language supported by signs, or in signed language. This paper studies the linguistic environment…
Speech-Language Pathologists: Vital Listening and Spoken Language Professionals
ERIC Educational Resources Information Center
Houston, K. Todd; Perigoe, Christina B.
2010-01-01
Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…
Spoken Language Development in Oral Preschool Children with Permanent Childhood Deafness
ERIC Educational Resources Information Center
Sarant, Julia Z.; Holt, Colleen M.; Dowell, Richard C.; Rickards, Field W.
2009-01-01
This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were…
Modality and morphology: what we write may not be what we say.
Rapp, Brenda; Fischer-Baum, Simon; Miozzo, Michele
2015-06-01
Written language is an evolutionarily recent human invention; consequently, its neural substrates cannot be determined by the genetic code. How, then, does the brain incorporate skills of this type? One possibility is that written language is dependent on evolutionarily older skills, such as spoken language; another is that dedicated substrates develop with expertise. If written language does depend on spoken language, then acquired deficits of spoken and written language should necessarily co-occur. Alternatively, if at least some substrates are dedicated to written language, such deficits may doubly dissociate. We report on 5 individuals with aphasia, documenting a double dissociation in which the production of affixes (e.g., the -ing in jumping) is disrupted in writing but not speaking or vice versa. The findings reveal that written- and spoken-language systems are considerably independent from the standpoint of morpho-orthographic operations. Understanding this independence of the orthographic system in adults has implications for the education and rehabilitation of people with written-language deficits. © The Author(s) 2015.
Method for automatic measurement of second language speaking proficiency
NASA Astrophysics Data System (ADS)
Bernstein, Jared; Balogh, Jennifer
2005-04-01
Spoken language proficiency is intuitively related to effective and efficient communication in spoken interactions. However, it is difficult to derive a reliable estimate of spoken language proficiency by situated elicitation and evaluation of a person's communicative behavior. This paper describes the task structure and scoring logic of a group of fully automatic spoken language proficiency tests (for English, Spanish, and Dutch) that are delivered via telephone or Internet. Test items are presented in spoken form and require a spoken response. Each test is automatically scored and primarily based on short, decontextualized tasks that elicit integrated listening and speaking performances. The tests present several types of tasks to candidates, including sentence repetition, question answering, sentence construction, and story retelling. The spoken responses are scored according to the lexical content of the response and a set of acoustic base measures on segments, words, and phrases, which are scaled with IRT methods or parametrically combined to optimize fit to human listener judgments. Most responses are isolated spoken phrases and sentences that are scored according to their linguistic content, their latency, and their fluency and pronunciation. The item development procedures and item norming are described.
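The scoring logic described above (a lexical-content measure combined with timing and fluency measures into one score) can be illustrated in miniature as follows. Everything here is invented for illustration: the function names, the linear weights, and the single latency measure. The actual tests scale many acoustic base measures with IRT methods or fit them parametrically to human judgments rather than using fixed hand-picked weights.

```python
# Toy sketch of combining a content measure with a timing measure
# (hypothetical weights; real tests fit these to human listener judgments).

def lexical_score(response_words, expected_words):
    """Fraction of expected content words present in the response."""
    expected = set(expected_words)
    return len(expected & set(response_words)) / len(expected)

def latency_score(latency_s, max_latency_s=5.0):
    """1.0 for an immediate response, falling linearly to 0 at max latency."""
    return max(0.0, 1.0 - latency_s / max_latency_s)

def proficiency_score(response_words, expected_words, latency_s,
                      w_lex=0.7, w_lat=0.3):
    """Weighted linear combination of content and timing measures."""
    return (w_lex * lexical_score(response_words, expected_words)
            + w_lat * latency_score(latency_s))

# Sentence-repetition item: candidate repeats 3 of 4 words after 1 second.
score = proficiency_score(
    ["the", "dog", "runs"], ["the", "dog", "runs", "fast"], latency_s=1.0)
print(round(score, 3))
```

A set-overlap content score is of course far cruder than the segment-, word-, and phrase-level acoustic measures the abstract mentions; the sketch only shows how heterogeneous base measures can be reduced to one scalar per item.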
Geytenbeek, Joke J; Mokkink, Lidwine B; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J
2014-09-01
In clinical practice, a variety of diagnostic tests are available to assess a child's comprehension of spoken language. However, none of these tests have been designed specifically for use with children who have severe motor impairments and who experience severe difficulty when using speech to communicate. This article describes the process of investigating the reliability and validity of the Computer-Based Instrument for Low Motor Language Testing (C-BiLLT), which was specifically developed to assess spoken Dutch language comprehension in children with cerebral palsy and complex communication needs. The study included 806 children with typical development and 87 nonspeaking children with cerebral palsy and complex communication needs, and was designed to provide information on the psychometric qualities of the C-BiLLT. The potential utility of the C-BiLLT as a measure of spoken Dutch language comprehension abilities for children with cerebral palsy and complex communication needs is discussed.
Spoken language skills and educational placement in Finnish children with cochlear implants.
Lonka, Eila; Hasan, Marja; Komulainen, Erkki
2011-01-01
This study reports the demographics, the auditory and spoken language development, and the educational settings for a total of 164 Finnish children with cochlear implants. Two questionnaires were employed: the first, concerning day care and educational placement, was filled in by professionals for rehabilitation guidance, and the second, evaluating language development (categories of auditory performance, spoken language skills, and main mode of communication), by speech and language therapists in audiology departments. Nearly half of the children were enrolled in normal kindergartens and 43% of school-aged children in mainstream schools. Categories of auditory performance were observed to grow in relation to age at cochlear implantation (p < 0.001) as well as in relation to proportional hearing age (p < 0.001). The composite scores for language development moved to more diversified ones in relation to increasing age at cochlear implantation and proportional hearing age (p < 0.001). Children without additional disorders outperformed those with additional disorders. The results indicate that the most favorable age for cochlear implantation could be earlier than 2 years of age. Compared to other children, spoken language evaluation scores of those with additional disabilities were significantly lower; however, these children showed gradual improvements in their auditory perception and language scores. Copyright © 2011 S. Karger AG, Basel.
Spoken word recognition by Latino children learning Spanish as their first language*
HURTADO, NEREYDA; MARCHMAN, VIRGINIA A.; FERNALD, ANNE
2010-01-01
Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development. PMID:17542157
Infant perceptual development for faces and spoken words: An integrated approach
Watson, Tamara L; Robbins, Rachel A; Best, Catherine T
2014-01-01
There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. PMID:25132626
Early Sign Language Exposure and Cochlear Implantation Benefits.
Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S
2017-07-01
Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.
System in Black Language. Multilingual Matters Series: 77.
ERIC Educational Resources Information Center
Sutcliffe, David; Figueroa, John
An examination of pattern in certain languages spoken primarily by Blacks has both a narrow and a broad focus. The former is on structure and development of the creole spoken by Jamaicans in England and to a lesser extent, a Black country English. The broader focus is on the relationship between the Kwa languages of West Africa and the…
ERIC Educational Resources Information Center
Medwetsky, Larry
2011-01-01
Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…
Weismer, Susan Ellis
2015-01-01
Purpose: Spoken language benchmarks proposed by Tager-Flusberg et al. (2009) were used to characterize communication profiles of toddlers with autism spectrum disorders and to investigate if there were differences in variables hypothesized to influence language development at different benchmark levels. Method: The communication abilities of a large sample of toddlers with autism spectrum disorders (N = 105) were characterized in terms of spoken language benchmarks. The toddlers were grouped according to these benchmarks to investigate whether there were differences in selected variables across benchmark groups at a mean age of 2.5 years. Results: The majority of children in the sample presented with uneven communication profiles with relative strengths in phonology and significant weaknesses in pragmatics. When children were grouped according to one expressive language domain, across-group differences were observed in response to joint attention and gestures but not cognition or restricted and repetitive behaviors. Conclusion: The spoken language benchmarks are useful for characterizing early communication profiles and investigating features that influence expressive language growth. PMID:26254475
ERIC Educational Resources Information Center
Farfan, Jose Antonio Flores
Even though Nahuatl is the most widely spoken indigenous language in Mexico, it is endangered. Threats include poor support for Nahuatl-speaking communities, migration of Nahuatl speakers to cities where English and Spanish are spoken, prejudicial attitudes toward indigenous languages, lack of contact between small communities of different…
ERIC Educational Resources Information Center
Taha, Haitham
2017-01-01
The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and…
Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine
2015-01-01
Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070
Direction Asymmetries in Spoken and Signed Language Interpreting
ERIC Educational Resources Information Center
Nicodemus, Brenda; Emmorey, Karen
2013-01-01
Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…
Spoken Language Development in Children Following Cochlear Implantation
Niparko, John K.; Tobey, Emily A.; Thal, Donna J.; Eisenberg, Laurie S.; Wang, Nae-Yuh; Quittner, Alexandra L.; Fink, Nancy E.
2010-01-01
Context Cochlear implantation (CI) is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe-to-profound sensorineural hearing loss (SNHL). Objective To prospectively assess spoken language acquisition following CI in young children with adjustment for covariates. Design, Setting, and Participants Prospective, longitudinal, and multidimensional assessment of spoken language growth over a 3-year period following CI. Prospective cohort study of children who underwent CI before 5 years of age (n = 188) from 6 US centers and hearing children of similar ages (n = 97) from 2 preschools, recruited between November 2002 and December 2004. Follow-up was completed between November 2005 and May 2008. Main Outcome Measures Performance on measures of spoken language comprehension and expression. Results Children undergoing CI showed greater growth in spoken language performance (10.4 points/year [95% confidence interval, 9.6–11.2] in comprehension; 8.4 [7.8–9.0] in expression) than would be predicted by their pre-CI baseline scores (5.4 [4.1–6.7] in comprehension; 5.8 [4.6–7.0] in expression). Although mean scores were not restored to age-appropriate levels after 3 years, significantly greater annual rates of language acquisition were observed in children who were younger at CI (1.1 [0.5–1.7] points in comprehension per year younger; 1.0 [0.6–1.5] in expression) and in children with shorter histories of hearing deficit (0.8 [0.2–1.2] points in comprehension per year shorter; 0.6 [0.2–1.0] in expression). In multivariable analyses, greater residual hearing prior to CI, higher ratings of parent-child interactions, and higher socioeconomic status were associated with greater rates of growth in comprehension and expression. Conclusions The use of cochlear implants in young children was associated with better spoken language learning than would be predicted from their pre-implantation scores.
However, discrepancies between participants’ chronologic and language age persisted after CI, underscoring the importance of early CI in appropriately selected candidates. PMID:20407059
Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul
2010-01-01
Purpose The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition as part of treatment research studies or for use in applied settings and (b) to propose and define a common terminology for describing levels of spoken language ability in the expressive modality and to set benchmarks for determining a child's language level in order to establish a framework for comparing outcomes across intervention studies. Method The National Institute on Deafness and Other Communication Disorders assembled a group of researchers with interests and experience in the study of language development and disorders in young children with autism spectrum disorders. The group worked for 18 months through a series of conference calls and correspondence, culminating in a meeting held in December 2007 to achieve consensus on these aims. Results The authors recommend moving away from the term functional speech and adopting a developmental framework instead. Within this framework, they recommend using multiple sources of information to define language phases, including natural language samples, parent report, and standardized measures. They also provide guidelines and objective criteria for defining children's spoken language expression in three major phases that correspond to developmental levels between 12 and 48 months of age. PMID:19380608
ERIC Educational Resources Information Center
Crowe, Kathryn; McLeod, Sharynne
2016-01-01
The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…
Spoken Grammar Awareness Raising: Does It Affect the Listening Ability of Iranian EFL Learners?
ERIC Educational Resources Information Center
Rashtchi, Mojgan; Afzali, Mahnaz
2011-01-01
Advances in spoken corpora analysis have brought about new insights into language pedagogy and have led to an awareness of the characteristics of spoken language. Current findings have shown that grammar of spoken language is different from written language. However, most listening and speaking materials are concocted based on written grammar and…
Spoken language outcomes after hemispherectomy: factoring in etiology.
Curtiss, S; de Bode, S; Mathern, G W
2001-12-01
We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) or acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p = .0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p = .0006); right-sided resections led to higher SLRs only for the acquired group (p = .0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p = .0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology.
Simultaneous Communication Supports Learning in Noise by Cochlear Implant Users
Blom, Helen C.; Marschark, Marc; Machmer, Elizabeth
2017-01-01
Objectives This study sought to evaluate the potential of using spoken language and signing together (simultaneous communication, SimCom, sign-supported speech) as a means of improving speech recognition, comprehension, and learning by cochlear implant users in noisy contexts. Methods Forty-eight college students who were active cochlear implant users watched videos of three short presentations, the text versions of which were standardized at the 8th-grade reading level. One passage was presented in spoken language only, one was presented in spoken language with multi-talker babble background noise, and one was presented via simultaneous communication with the same background noise. Following each passage, participants responded to 10 (standardized) open-ended questions designed to assess comprehension. Indicators of participants' spoken language and sign language skills were obtained via self-reports and objective assessments. Results When spoken materials were accompanied by signs, scores were significantly higher than when materials were spoken in noise without signs. Participants' receptive spoken language skills significantly predicted scores in all three conditions; neither their receptive sign skills nor age of implantation predicted performance. Discussion Students who are cochlear implant users typically rely solely on spoken language in the classroom. The present results, however, suggest that there are potential benefits of simultaneous communication for such learners in noisy settings. For those cochlear implant users who know sign language, the redundancy of speech and signs potentially can offset the reduced fidelity of spoken language in noise. Conclusion Accompanying spoken language with signs can benefit learners who are cochlear implant users in noisy situations such as classroom settings.
Factors associated with such benefits, such as receptive skills in signed and spoken modalities, classroom acoustics, and material difficulty, need to be empirically examined. PMID:28010675
Schiff, Rachel; Saiegh-Haddad, Elinor
2018-01-01
This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children's phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children's early morphological awareness in SpA explained variance in children's gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.
Brain Bases of Morphological Processing in Young Children
Arredondo, Maria M.; Ip, Ka I; Hsu, Lucy Shih-Ju; Tardif, Twila; Kovelman, Ioulia
2017-01-01
How does the developing brain support the transition from spoken language to print? Two spoken language abilities form the initial base of child literacy across languages: knowledge of language sounds (phonology) and knowledge of the smallest units that carry meaning (morphology). While phonology has received much attention from the field, the brain mechanisms that support morphological competence for learning to read remain largely unknown. In the present study, young English-speaking children completed an auditory morphological awareness task behaviorally (n = 69, ages 6–12) and in fMRI (n = 16). The data revealed two findings: First, children with better morphological abilities showed greater activation in left temporo-parietal regions previously thought to be important for supporting phonological reading skills, suggesting that this region supports multiple language abilities for successful reading acquisition. Second, children showed activation in left frontal regions previously found active in young Chinese readers, suggesting morphological processes for reading acquisition might be similar across languages. These findings offer new insights for developing a comprehensive model of how spoken language abilities support children’s reading acquisition across languages. PMID:25930011
Acquisition of graphic communication by a young girl without comprehension of spoken language.
von Tetzchner, S; Øvreeide, K D; Jørgensen, K K; Ormhaug, B M; Oxholm, B; Warme, R
To describe a graphic-mode communication intervention involving a girl with intellectual impairment and autism who did not develop comprehension of spoken language. The aim was to teach graphic-mode vocabulary that reflected her interests, preferences, and the activities and routines of her daily life, by providing sufficient cues to the meanings of the graphic representations so that she would not need to comprehend spoken instructions. An individual case study design was selected, including the use of written records, participant observation, and registration of the girl's graphic vocabulary and use of graphic signs and other communicative expressions. While the girl's comprehension (and hence use) of spoken language remained lacking over a 3-year period, she acquired active use of over 80 photographs and pictograms. The girl was able to cope better with the cognitive and attentional requirements of graphic communication than with those of spoken language and manual signs, which had been the focus of earlier interventions. Her achievements demonstrate that it is possible for communication-impaired children to learn to use an augmentative and alternative communication system without speech comprehension, provided the intervention utilizes functional strategies and non-language cues to the meanings of the graphic representations that are taught.
On-Line Orthographic Influences on Spoken Language in a Semantic Task
ERIC Educational Resources Information Center
Pattamadilok, Chotiga; Perre, Laetitia; Dufau, Stephane; Ziegler, Johannes C.
2009-01-01
Literacy changes the way the brain processes spoken language. Most psycholinguists believe that orthographic effects on spoken language are either strategic or restricted to meta-phonological tasks. We used event-related brain potentials (ERPs) to investigate the locus and the time course of orthographic effects on spoken word recognition in a…
Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David
2013-12-06
Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. 
We will conduct subgroup analyses if possible according to severity/type of hearing disorder, age of identification, and type of hearing technology. This review will provide evidence on the effectiveness of using sign language in combination with oral language therapies for developing spoken language in children with hearing loss who are identified at a young age. The information from this review can provide guidance to parents and intervention specialists, inform policy decisions and provide directions for future research. CRD42013005426.
ERIC Educational Resources Information Center
Dang, Thi Ngoc Yen; Coxhead, Averil; Webb, Stuart
2017-01-01
The linguistic features of academic spoken English are different from those of academic written English. Therefore, for this study, an Academic Spoken Word List (ASWL) was developed and validated to help second language (L2) learners enhance their comprehension of academic speech in English-medium universities. The ASWL contains 1,741 word…
Yoshinaga-Itano, Christine; Wiggin, Mallene
2016-11-01
Hearing is essential for the development of speech, spoken language, and listening skills. Previously, children with hearing loss often went undiagnosed until they were 2.5 or 3 years of age. The auditory deprivation during this critical period of development significantly impacted long-term listening and spoken language outcomes. With the advent of universal newborn hearing screening, the average age of diagnosis has dropped to the first few months of life, which sets the stage for outcomes that include speech, spoken language, and auditory skill testing in the normal range. However, our work is not finished. The future holds even greater possibilities for children with hearing loss.
Professional Training in Listening and Spoken Language--A Canadian Perspective
ERIC Educational Resources Information Center
Fitzpatrick, Elizabeth
2010-01-01
Several factors undoubtedly influenced the development of listening and spoken language options for children with hearing loss in Canada. The concept of providing auditory-based rehabilitation was popularized in Canada in the 1960s through the work of Drs. Daniel Ling and Agnes Ling in Montreal. The Lings founded the McGill University Project for…
Lexical Processing in Spanish Sign Language (LSE)
ERIC Educational Resources Information Center
Carreiras, Manuel; Gutierrez-Sigut, Eva; Baquero, Silvia; Corina, David
2008-01-01
Lexical access is concerned with how the spoken or visual input of language is projected onto the mental representations of lexical forms. To date, most theories of lexical access have been based almost exclusively on studies of spoken languages and/or orthographic representations of spoken languages. Relatively few studies have examined how…
Scaling laws and model of words organization in spoken and written language
NASA Astrophysics Data System (ADS)
Bian, Chunhua; Lin, Ruokuang; Zhang, Xiaoyu; Ma, Qianli D. Y.; Ivanov, Plamen Ch.
2016-01-01
A broad range of complex physical and biological systems exhibits scaling laws. Human language is a complex system of words organization. Studies of written texts have revealed intriguing scaling laws that characterize the frequency of words occurrence, the rank of words, and the growth in the number of distinct words with text length. While studies have predominantly focused on the language system in its written form, such as books, little attention has been given to the structure of spoken language. Here we investigate a database of spoken language transcripts and written texts, and we find that the organization of words in both spoken language and written texts exhibits scaling laws, although with different crossover regimes and scaling exponents. We propose a model that provides insight into words organization in spoken language and written texts, and successfully accounts for all scaling laws empirically observed in both language forms.
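The rank-frequency scaling this abstract refers to is classically modeled by Zipf's law, f(r) ∝ r^(-α). As a minimal sketch of how such an exponent might be estimated from a transcript, the function below fits a log-log slope to word counts; the toy corpus and the function name are illustrative stand-ins, not the study's actual database or method:

```python
import math
from collections import Counter

def zipf_exponent(text):
    """Estimate the Zipf exponent alpha, where frequency(rank) ~ rank**(-alpha),
    via an ordinary least-squares fit in log-log space."""
    counts = Counter(text.lower().split())
    freqs = sorted(counts.values(), reverse=True)  # frequency by rank
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # alpha is the negative of the log-log slope

# Toy corpus; a real estimate needs a full transcript or text
sample = "the cat sat on the mat and the dog sat on the rug"
alpha = zipf_exponent(sample)
```

At this corpus size the fit is necessarily noisy; the point is only the shape of the computation (count, rank, fit in log-log coordinates), which applies equally to spoken transcripts and written texts.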
Production Is Only Half the Story - First Words in Two East African Languages.
Alcock, Katherine J
2017-01-01
Theories of early learning of nouns in children's vocabularies divide into those that emphasize input (language and non-linguistic aspects) and those that emphasize child conceptualisation. Most data, though, come from production alone, assuming that learning a word equals speaking it. Methodological issues can mean that production and comprehension data within or across input languages are not comparable. Early vocabulary production and comprehension were examined in children hearing two Eastern Bantu languages whose grammatical features may encourage early verb knowledge. Parents of 208 infants aged 8-20 months were interviewed using Communicative Development Inventories that assess infants' first spoken and comprehended words. Raw totals, and proportions of chances to know a word, were compared to data from other languages. First spoken words were mainly nouns (75-95% were nouns versus less than 10% verbs), but first comprehended words included more verbs (15% were verbs) than spoken words did. The proportion of children's spoken words that were verbs increased with vocabulary size, but not the proportion of comprehended words. Significant differences were found between children's comprehension and production but not between languages. This may be for pragmatic reasons, rather than due to concepts with which children approach language learning, or directly due to the input language. PMID:29163280
ERIC Educational Resources Information Center
Nicholas, Johanna Grant; Geers, Ann E.
2007-01-01
Purpose: The authors examined the benefits of younger cochlear implantation, longer cochlear implant use, and greater pre-implant aided hearing to spoken language at 3.5 and 4.5 years of age. Method: Language samples were obtained at ages 3.5 and 4.5 years from 76 children who received an implant by their 3rd birthday. Hierarchical linear modeling…
Why Oracy Must Be in the Curriculum (and Group Work in the Classroom)
ERIC Educational Resources Information Center
Mercer, Neil
2015-01-01
In this article it is argued that the development of young people's skills in using spoken language should be given more time and attention in the school curriculum. The author discusses the importance of the effective use of spoken language in educational and work settings, considers what research has told us about the factors that make group…
ERIC Educational Resources Information Center
Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José
2015-01-01
Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…
Effects of early auditory experience on the spoken language of deaf children at 3 years of age.
Nicholas, Johanna Grant; Geers, Ann E
2006-06-01
By age 3, typically developing children have achieved extensive vocabulary and syntax skills that facilitate both cognitive and social development. Substantial delays in spoken language acquisition have been documented for children with severe to profound deafness, even those with auditory oral training and early hearing aid use. This study documents the spoken language skills achieved by orally educated 3-yr-olds whose profound hearing loss was identified and hearing aids fitted between 1 and 30 mo of age and who received a cochlear implant between 12 and 38 mo of age. The purpose of the analysis was to examine the effects of age, duration, and type of early auditory experience on spoken language competence at age 3.5 yr. The spoken language skills of 76 children who had used a cochlear implant for at least 7 mo were evaluated via a standardized 30-minute language sample analysis, a parent-completed vocabulary checklist, and a teacher language-rating scale. The children were recruited from and enrolled in oral education programs or therapy practices across the United States. Inclusion criteria included deafness presumed present since birth, English as the primary language of the home, no other known conditions that interfere with speech/language development, enrollment in programs using oral education methods, and no known problems with the cochlear implant lasting more than 30 days. Strong correlations were obtained among all language measures. Therefore, principal components analysis was used to derive a single Language Factor score for each child.
A number of possible predictors of language outcome were examined, including age at identification and intervention with a hearing aid, duration of use of a hearing aid, pre-implant pure-tone average (PTA) threshold with a hearing aid, PTA threshold with a cochlear implant, and duration of use of a cochlear implant/age at implantation (the last two variables were practically identical because all children were tested between 40 and 44 mo of age). Examination of the independent influence of these predictors through multiple regression analysis revealed that pre-implant-aided PTA threshold and duration of cochlear implant use (i.e., age at implant) accounted for 58% of the variance in Language Factor scores. A significant negative coefficient associated with pre-implant-aided threshold indicated that children with poorer hearing before implantation exhibited poorer language skills at age 3.5 yr. Likewise, a strong positive coefficient associated with duration of implant use indicated that children who had used their implant for a longer period of time (i.e., who were implanted at an earlier age) exhibited better language at age 3.5 yr. Age at identification and amplification was unrelated to language outcome, as was aided threshold with the cochlear implant. A significant quadratic trend in the relation between duration of implant use and language score revealed a steady increase in language skill (at age 3.5 yr) for each additional month of use of a cochlear implant after the first 12 mo of implant use. The advantage to language of longer implant use became more pronounced over time. Longer use of a cochlear implant in infancy and very early childhood dramatically affects the amount of spoken language exhibited by 3-yr-old, profoundly deaf children. In this sample, the amount of pre-implant intervention with a hearing aid was not related to language outcome at 3.5 yr of age. 
Rather, it was cochlear implantation at a younger age that served to promote spoken language competence. The previously identified language-facilitating factors of early identification of hearing impairment and early educational intervention may not be sufficient for optimizing spoken language of profoundly deaf children unless it leads to early cochlear implantation.
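The composite scoring step described in the abstract above (collapsing several strongly correlated language measures into a single Language Factor via principal components analysis) can be sketched as follows. This is a minimal illustration with invented scores, not the study's actual data or code: standardized measures are projected onto the first principal component of their correlation matrix, found here by simple power iteration.

```python
# Illustrative sketch (invented data): deriving a single "Language Factor"
# score from several correlated language measures by projecting
# standardized scores onto the first principal component of their
# correlation matrix.
import math

def zscores(xs):
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    return [(x - m) / sd for x in xs]

def first_pc(corr, iters=200):
    """First eigenvector of a symmetric matrix via power iteration."""
    n = len(corr)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(corr[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Hypothetical scores on three correlated measures (language sample,
# parent vocabulary checklist, teacher rating) for six children.
measures = [
    [52, 61, 48, 70, 55, 63],        # language-sample score
    [110, 130, 100, 150, 118, 135],  # vocabulary checklist
    [3.1, 3.8, 2.9, 4.5, 3.3, 4.0],  # teacher rating
]
z = [zscores(m) for m in measures]
n_kids = len(z[0])
# Correlation matrix of the standardized measures.
corr = [[sum(z[i][k] * z[j][k] for k in range(n_kids)) / (n_kids - 1)
         for j in range(3)] for i in range(3)]
pc = first_pc(corr)
# One factor score per child: weighted sum of that child's z-scores.
factor = [sum(pc[i] * z[i][k] for i in range(3)) for k in range(n_kids)]
print([round(f, 2) for f in factor])
```

Because the three measures here are strongly positively correlated, the first component weights them all positively, so the factor score simply ranks children consistently across measures.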
Writing Signed Languages: What For? What Form?
Grushkin, Donald A
2017-01-01
Signed languages around the world have tended to maintain an "oral," unwritten status. Despite the advantages of possessing a written form of their language, signed language communities typically resist and reject attempts to create such written forms. The present article addresses many of the arguments against written forms of signed languages, and presents the potential advantages of writing signed languages. Following a history of the development of writing in spoken as well as signed language populations, the effects of orthographic types upon literacy and biliteracy are explored. Attempts at writing signed languages have followed two primary paths: "alphabetic" and "iconographic." It is argued that for greatest congruency and ease in developing biliteracy strategies in societies where an alphabetic script is used for the spoken language, signed language communities within these societies are best served by adoption of an alphabetic script for writing their signed language.
Worsfold, Sarah; Mahon, Merle; Pimperton, Hannah; Stevenson, Jim; Kennedy, Colin
2018-06-01
Deaf and hard of hearing (D/HH) children and young people are known to show group-level deficits in spoken language and reading abilities relative to their hearing peers. However, there is little evidence on the longitudinal predictive relationships between language and reading in this population. Aims: to determine the extent to which differences in spoken language ability in childhood predict reading ability in D/HH adolescents. Methods and procedures: participants were drawn from a population-based cohort study and comprised 53 D/HH teenagers, who used spoken language, and a comparison group of 38 normally hearing teenagers. All had completed standardised measures of spoken language (expression and comprehension) and reading (accuracy and comprehension) at 6-10 and 13-19 years of age. Outcomes and results: forced entry stepwise regression showed that, after taking reading ability at age 8 years into account, language scores at age 8 years did not add significantly to the prediction of Reading Accuracy z-scores at age 17 years (change in R² = 0.01, p = .459) but did make a significant contribution to the prediction of Reading Comprehension z-scores at age 17 years (change in R² = 0.17, p < .001). Conclusions and implications: in D/HH individuals who are spoken language users, expressive and receptive language skills in middle childhood predict reading comprehension ability in adolescence. Continued intervention to support language development beyond primary school has the potential to benefit reading comprehension and hence educational access for D/HH adolescents. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
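The change-in-R² logic used in the abstract above (does a language score add predictive power for later reading once earlier reading is controlled for?) can be sketched as a two-step hierarchical regression. This is a minimal sketch with invented z-scores, not the study's data: fit a reduced model, fit a full model with the added predictor, and compare the R² values.

```python
# Illustrative sketch (invented data): hierarchical ("forced entry")
# regression, testing whether a language score adds predictive power for
# later reading after controlling for earlier reading, via the change in
# R-squared between a reduced and a full model.

def ols_r2(X, y):
    """R^2 of least-squares fit y ~ X (X includes an intercept column)."""
    n, p = len(X), len(X[0])
    # Normal equations (X'X) b = X'y, solved by Gauss-Jordan elimination.
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
         + [sum(X[k][i] * y[k] for k in range(n))] for i in range(p)]
    for i in range(p):
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(p):
            if r != i:
                f = A[r][i] / A[i][i]
                A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    b = [A[i][p] / A[i][i] for i in range(p)]
    yhat = [sum(bi * xi for bi, xi in zip(b, row)) for row in X]
    ym = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ym) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical z-scores: reading at age 8, language at age 8, and
# reading comprehension at age 17, for eight children.
read8 = [-1.2, -0.5, 0.1, 0.4, 0.8, -0.3, 1.1, 0.6]
lang8 = [-0.9, -0.7, 0.3, 0.2, 1.0, -0.6, 0.9, 0.8]
read17 = [-1.0, -0.8, 0.2, 0.3, 1.1, -0.5, 1.0, 0.9]

reduced = ols_r2([[1, r] for r in read8], read17)
full = ols_r2([[1, r, l] for r, l in zip(read8, lang8)], read17)
print(f"R2 reduced={reduced:.3f}, full={full:.3f}, change={full - reduced:.3f}")
```

For nested models like these, the full model's R² can never be lower than the reduced model's; what the study tests is whether the increase is statistically significant.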
Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.
Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung
2016-01-01
Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.
Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts
ERIC Educational Resources Information Center
Office of English Language Acquisition, US Department of Education, 2015
2015-01-01
The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…
The Cortical Organization of Lexical Knowledge: A Dual Lexicon Model of Spoken Language Processing
ERIC Educational Resources Information Center
Gow, David W., Jr.
2012-01-01
Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood.…
ERIC Educational Resources Information Center
van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
2011-01-01
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of the Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives…
ERIC Educational Resources Information Center
Verhoeven, Ludo; Steenge, Judit; van Leeuwe, Jan; van Balkom, Hans
2017-01-01
In this study, we investigated which componential skills can be distinguished in the second language (L2) development of 140 bilingual children with specific language impairment in the Netherlands, aged 6-11 years, divided into 3 age groups. L2 development was assessed by means of spoken language tasks representing different language skills…
Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen
2016-02-01
To investigate the impact of a spoken language intervention curriculum aimed at improving the language environments that caregivers of low socioeconomic status (SES) provide for their deaf and hard-of-hearing (D/HH) children with cochlear implants (CIs) and hearing aids (HAs), in order to support children's spoken language development. Design: quasiexperimental. Setting: tertiary. Participants: thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies Medicaid or WIC/LINK), with children aged < 4.5 years, with hearing loss of ≥ 30 dB between 500 and 4000 Hz, using at least one adequately fitted amplification device (hearing aid, cochlear implant, or osseo-integrated device). Intervention: a behavioral, caregiver-directed educational intervention curriculum designed to improve D/HH children's early language environments. Main outcome measures: changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], and Conversational Turn Count [CTC]). Results: significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group; no significant changes in LENA outcomes. Conclusions: the results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.
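Several of the caregiver-talk measures named in the abstract above are simple counts over a transcript. As a minimal sketch on a toy transcript (word-based MLU; formal transcription systems often count morphemes and follow stricter segmentation rules):

```python
# Illustrative sketch (toy transcript): computing utterance count, word
# tokens, word types, and mean length of utterance (MLU) in words from a
# list of transcribed caregiver utterances.
utterances = [
    "look at the doggy",
    "the doggy is eating",
    "do you see the doggy",
    "yes you do",
]
words = [w for utt in utterances for w in utt.split()]
n_utterances = len(utterances)
word_tokens = len(words)            # every word occurrence
word_types = len(set(words))        # distinct words
mlu = word_tokens / n_utterances    # mean length of utterance in words
print(n_utterances, word_tokens, word_types, round(mlu, 2))  # → 4 16 10 4.0
```

In practice, type and token counts are usually computed after normalizing case and stripping punctuation, which the toy transcript above sidesteps.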
Auditory Technology and Its Impact on Bilingual Deaf Education
ERIC Educational Resources Information Center
Mertes, Jennifer
2015-01-01
Brain imaging studies suggest that children can simultaneously develop, learn, and use two languages. A visual language, such as American Sign Language (ASL), facilitates development at the earliest possible moments in a child's life. Spoken language development can be delayed due to diagnostic evaluations, device fittings, and auditory skill…
Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E
2012-04-01
Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.
Language and Culture in the Multi-Ethnic Community: Spoken-Language Assessment
ERIC Educational Resources Information Center
Matluck, Joseph H.; Mace-Matluck, Betty J.
1975-01-01
Describes the research approach used to develop the MAT-SEA-CAL Oral Proficiency tests designed by the authors. Language test performance depends on both language proficiency and knowledge of the culture. (TL)
Rämä, Pia; Sirri, Louah; Serres, Josette
2013-04-01
Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that an N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds, a similar effect was observed only in children with higher word production ability. The results suggest that words are categorically organized in the mental lexicon of children by the age of 2 years, and even earlier in children with large vocabularies. Copyright © 2013 Elsevier Inc. All rights reserved.
Strandroos, Lisa; Antelius, Eleonor
2017-09-01
Previous research concerning bilingual people with dementia has mainly focused on the importance of sharing a spoken language with caregivers. While acknowledging this, the present article addresses the multidimensional character of communication and interaction. Because dementia makes the use of spoken language increasingly difficult, this multidimensionality becomes particularly important. The article is based on a qualitative analysis of ethnographic fieldwork at a dementia care facility. It presents ethnographic examples of different communicative forms, with particular focus on bilingual interactions. Interaction is understood as a collective and collaborative activity. The text finds that a shared spoken language is advantageous but is neither the only source of, nor a guarantee for, creating common ground and understanding. Communicative resources other than spoken language include, for example, body language, embodiment, artefacts, and time. Furthermore, forms of communication are not static but develop, change, and are created over time. The ability to communicate is thus not something that one has or has not, but is situationally and collaboratively created. To facilitate this, time and familiarity are central resources, and the results indicate the importance of continuity in interpersonal relations.
Language and Literacy Development of Deaf and Hard-of-Hearing Children: Successes and Challenges
ERIC Educational Resources Information Center
Lederberg, Amy R.; Schick, Brenda; Spencer, Patricia E.
2013-01-01
Childhood hearing loss presents challenges to language development, especially spoken language. In this article, we review existing literature on deaf and hard-of-hearing (DHH) children's patterns and trajectories of language as well as development of theory of mind and literacy. Individual trajectories vary significantly, reflecting access to…
Research on Spoken Dialogue Systems
NASA Technical Reports Server (NTRS)
Aist, Gregory; Hieronymus, James; Dowding, John; Hockey, Beth Ann; Rayner, Manny; Chatzichrisafis, Nikos; Farrell, Kim; Renders, Jean-Michel
2010-01-01
Research in the field of spoken dialogue systems has been performed with the goal of making such systems more robust and easier to use in demanding situations. The term "spoken dialogue systems" signifies unified software systems containing speech-recognition, speech-synthesis, dialogue management, and ancillary components that enable human users to communicate, using natural spoken language or nearly natural prescribed spoken language, with other software systems that provide information and/or services.
ERIC Educational Resources Information Center
Wiefferink, C. H.; Spaai, G. W. G.; Uilenburg, N.; Vermeij, B. A. M.; De Raeve, L.
2008-01-01
In the present study, language development of Dutch children with a cochlear implant (CI) in a bilingual educational setting and Flemish children with a CI in a dominantly monolingual educational setting is compared. In addition, we compared the development of spoken language with the development of sign language in Dutch children. Eighteen…
Kanto, Laura; Huttunen, Kerttu; Laakso, Marja-Leena
2013-04-01
We explored variation in the linguistic environments of hearing children of Deaf parents and how it was associated with their early bilingual language development. For that purpose we followed up the children's productive vocabulary (measured with the MCDI; MacArthur Communicative Development Inventory) and syntactic complexity (measured with the MLU10; mean length of the 10 longest utterances the child produced during videorecorded play sessions) in both Finnish Sign Language and spoken Finnish between the ages of 12 and 30 months. Additionally, we developed new methodology for describing the linguistic environments of the children (N = 10). Large variation was uncovered in both the amount and type of language input and language acquisition among the children. Language exposure and increases in productive vocabulary and syntactic complexity were interconnected. Language acquisition was found to be more dependent on the amount of exposure in sign language than in spoken language. This was judged to be related to the status of sign language as a minority language. The results are discussed in terms of parents' language choices, family dynamics in Deaf-parented families and optimal conditions for bilingual development.
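The MLU10 measure described in the abstract above (mean length of the 10 longest utterances a child produced in a session) is straightforward to compute. A minimal sketch with an invented utterance list, counting words rather than morphemes:

```python
# Illustrative sketch: MLU10, the mean length (in words here; studies may
# count morphemes) of the 10 longest utterances in a recorded session.
def mlu10(utterances, k=10):
    lengths = sorted((len(u.split()) for u in utterances), reverse=True)
    top = lengths[:k]  # if fewer than k utterances, use all of them
    return sum(top) / len(top)

# Hypothetical child utterances from one play session.
sample = ["more juice", "mommy look", "I want the big red ball now",
          "doggy go", "where did the cat go", "no", "big truck",
          "I see a bird", "give me that", "up", "read book please",
          "that one is mine"]
print(round(mlu10(sample), 2))  # → 3.4
```

Restricting the mean to the longest utterances makes the measure less sensitive to the many one-word responses young children produce, which would otherwise drag a plain MLU down.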
ERIC Educational Resources Information Center
Hampton, L. H.; Kaiser, A. P.
2016-01-01
Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…
Horn, David L; Pisoni, David B; Miyamoto, Richard T
2006-08-01
The objective of this study was to assess relations between fine and gross motor development and spoken language processing skills in pediatric cochlear implant users. The authors conducted a retrospective analysis of longitudinal data. Prelingually deaf children who received a cochlear implant before age 5 and had no known developmental delay or cognitive impairment were included in the study. Fine and gross motor development were assessed before implantation using the Vineland Adaptive Behavior Scales, a standardized parental report of adaptive behavior. Fine and gross motor scores reflected a given child's motor functioning with respect to a normative sample of typically developing, normal-hearing children. Relations between these preimplant scores and postimplant spoken language outcomes were assessed. In general, gross motor scores were found to be positively related to chronologic age, whereas the opposite trend was observed for fine motor scores. Fine motor scores were more strongly correlated with postimplant expressive and receptive language scores than were gross motor scores. Our findings suggest a dissociation between fine and gross motor development in prelingually deaf children: fine motor skills, in contrast to gross motor skills, tend to be delayed as prelingually deaf children get older. These findings provide new knowledge about the links between motor and spoken language development and suggest that auditory deprivation may lead to atypical development of certain motor and language skills that share common cortical processing resources.
Unit 802: Language Varies with Approach.
ERIC Educational Resources Information Center
Minnesota Univ., Minneapolis. Center for Curriculum Development in English.
This eighth-grade language unit stresses developing the student's sensitivity to variations in language, primarily the similarities and differences between spoken and written language. Through sample lectures and discussion questions, the students are helped to form generalizations about language: that speech is the primary form of language; that…
Type of Iconicity Matters in the Vocabulary Development of Signing Children
ERIC Educational Resources Information Center
Ortega, Gerardo; Sümer, Beyza; Özyürek, Asli
2017-01-01
Recent research on signed as well as spoken language shows that the iconic features of the target language might play a role in language development. Here, we ask further whether different types of iconic depictions modulate children's preferences for certain types of sign-referent links during vocabulary development in sign language. Results from…
ERIC Educational Resources Information Center
Goldberg, Donald M.; Dickson, Cheryl L.; Flexer, Carol
2010-01-01
This article discusses the AG Bell Academy for Listening and Spoken Language--an organization designed to build capacity of certified Listening and Spoken Language Specialists (LSLS) by defining and maintaining a set of professional standards for LSLS professionals and thereby addressing the global deficit of qualified LSLS. Definitions and…
Grammar Is a System That Characterizes Talk in Interaction
Ginzburg, Jonathan; Poesio, Massimo
2016-01-01
Much of contemporary mainstream formal grammar theory is unable to provide analyses for language as it occurs in actual spoken interaction. Its analyses are developed for a cleaned-up version of language that omits the disfluencies, non-sentential utterances, gestures, and many other phenomena that are ubiquitous in spoken language. Using evidence from linguistics, conversation analysis, multimodal communication, psychology, language acquisition, and neuroscience, we show that these aspects of language use are rule-governed in much the same way as the phenomena captured by conventional grammars. Furthermore, we argue that over the past few years some of the tools required to provide a precise characterization of such phenomena have begun to emerge in theoretical and computational linguistics; hence, there is no reason for treating them as “second class citizens” other than pre-theoretical assumptions about what should fall under the purview of grammar. Finally, we suggest that grammar formalisms covering such phenomena would provide a better foundation not just for linguistic analysis of face-to-face interaction, but also for sister disciplines, such as research on spoken dialogue systems and/or psychological work on language acquisition. PMID:28066279
Language as a multimodal phenomenon: implications for language learning, processing and evolution
Vigliocco, Gabriella; Perniss, Pamela; Vinson, David
2014-01-01
Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is composed wholly of an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms; and, finally, iconicity is the norm, rather than the exception, in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages. PMID:25092660
Crume, Peter K
2013-10-01
The National Reading Panel emphasizes that spoken language phonological awareness (PA) developed at home and school can lead to improvements in reading performance in young children. However, research indicates that many deaf children are good readers even though they have limited spoken language PA. Is it possible that some deaf students benefit from teachers who promote sign language PA instead? The purpose of this qualitative study is to examine teachers' beliefs and instructional practices related to sign language PA. A thematic analysis is conducted on 10 participant interviews at an ASL/English bilingual school for the deaf to understand their views and instructional practices. The findings reveal that the participants had strong beliefs in developing students' structural knowledge of signs and used a variety of instructional strategies to build students' knowledge of sign structures in order to promote their language and literacy skills.
Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.
Douglas, Michael
2016-02-01
To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of pre-device-fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL in each year of the study. The DDI group also achieved statistically significantly higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group.
The preliminary results of this study support further investigation of whether DDI can consistently and significantly improve the spoken language achievement of children with hearing loss.
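The matched-pairs design described in the abstract above is typically analyzed on the pairwise score differences. As a minimal sketch with invented scores (not the study's data), a paired t statistic can be computed directly:

```python
# Illustrative sketch (invented scores): a matched-pairs comparison of
# post-treatment language scores, using a paired t statistic computed
# from the pairwise score differences.
import math

def paired_t(a, b):
    d = [x - y for x, y in zip(a, b)]      # pairwise differences
    n = len(d)
    mean_d = sum(d) / n
    sd = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
    return mean_d / (sd / math.sqrt(n))    # t with n-1 degrees of freedom

# Hypothetical raw scores for 11 matched pairs (treatment vs. control).
ddi =     [48, 52, 45, 60, 55, 50, 58, 47, 62, 53, 49]
control = [42, 50, 40, 55, 54, 44, 51, 45, 57, 50, 46]
t = paired_t(ddi, control)
print(round(t, 2))
```

Pairing removes between-pair variability (device-fitting age, time in program, and so on) from the error term, which is why the comparison is made within pairs rather than between unmatched groups.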
Gesture production and comprehension in children with specific language impairment.
Botting, Nicola; Riches, Nicholas; Gaynor, Marguerite; Morgan, Gary
2010-03-01
Children with specific language impairment (SLI) have difficulties with spoken language. However, some recent research suggests that these impairments reflect underlying cognitive limitations. Studying gesture may inform us clinically and theoretically about the nature of the association between language and cognition. A total of 20 children with SLI and 19 typically developing (TD) peers were assessed on a novel measure of gesture production. Children were also assessed for sentence comprehension errors in a speech-gesture integration task. Children with SLI performed equally to peers on gesture production but performed less well when comprehending integrated speech and gesture. Error patterns revealed a significant group interaction: children with SLI made more gesture-based errors, whilst TD children made semantically based ones. Children with SLI accessed and produced lexically encoded gestures despite having impaired spoken vocabulary and this group also showed stronger associations between gesture and language than TD children. When SLI comprehension breaks down, gesture may be relied on over speech, whilst TD children have a preference for spoken cues. The findings suggest that for children with SLI, gesture scaffolds are still more related to language development than for TD peers who have out-grown earlier reliance on gestures. Future clinical implications may include standardized assessment of symbolic gesture and classroom based gesture support for clinical groups.
The road to language learning is iconic: evidence from British Sign Language.
Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella
2012-12-01
An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.
Using Language Sample Analysis to Assess Spoken Language Production in Adolescents
ERIC Educational Resources Information Center
Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann
2016-01-01
Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…
A Platform for Multilingual Research in Spoken Dialogue Systems
2000-08-01
UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP010384. TITLE: A Platform for Multilingual Research in Spoken Dialogue Systems. Ronald A. Cole, Ben Serridge, John-Paul Hosom, Andrew Cronk, and Ed Kaiser. Center for Spoken Language Understanding, University of Colorado, Boulder; Boulder, CO 80309, USA; Universidad de las Americas, 72820 Santa Catarina Martir, Puebla, Mexico.
Executive Functioning and Speech-Language Skills Following Long-Term Use of Cochlear Implants
ERIC Educational Resources Information Center
Kronenberger, William G.; Colson, Bethany G.; Henning, Shirley C.; Pisoni, David B.
2014-01-01
Neurocognitive processes such as executive functioning (EF) may influence the development of speech-language skills in deaf children after cochlear implantation in ways that differ from normal-hearing, typically developing children. Conversely, spoken language abilities and experiences may also exert reciprocal effects on the development of EF.…
Examining Transcription, Autonomy and Reflective Practice in Language Development
ERIC Educational Resources Information Center
Cooke, Simon D.
2013-01-01
This pilot study explores language development among a class of L2 students who were required to transcribe and reflect upon spoken performances. The class was given tasks for self and peer-evaluation and afforded the opportunity to assume more responsibility for assessing language development of both themselves and their peers. Several studies…
Examining the Role of Time and Language Type in Reading Development for English Language Learners
ERIC Educational Resources Information Center
Betts, Joseph; Bolt, Sara; Decker, Dawn; Muyskens, Paul; Marston, Doug
2009-01-01
The purpose of this study was to examine the development of English reading achievement among English Language Learners (ELLs) and to determine whether the time that an ELL's family was in the United States and the type of native language spoken affected their reading development. Participants were 300 third-grade ELLs from two different native…
One grammar or two? Sign Languages and the Nature of Human Language
Lillo-Martin, Diane C; Gajewski, Jon
2014-01-01
Linguistic research has identified abstract properties that seem to be shared by all languages—such properties may be considered defining characteristics. In recent decades, the recognition that human language is found not only in the spoken modality but also in the form of sign languages has led to a reconsideration of some of these potential linguistic universals. In large part, the linguistic analysis of sign languages has led to the conclusion that universal characteristics of language can be stated at an abstract enough level to include languages in both spoken and signed modalities. For example, languages in both modalities display hierarchical structure at sub-lexical and phrasal level, and recursive rule application. However, this does not mean that modality-based differences between signed and spoken languages are trivial. In this article, we consider several candidate domains for modality effects, in light of the overarching question: are signed and spoken languages subject to the same abstract grammatical constraints, or is a substantially different conception of grammar needed for the sign language case? We look at differences between language types based on the use of space, iconicity, and the possibility for simultaneity in linguistic expression. The inclusion of sign languages does support some broadening of the conception of human language—in ways that are applicable for spoken languages as well. Still, the overall conclusion is that one grammar applies for human language, no matter the modality of expression. PMID:25013534
The cultural and linguistic diversity of 3-year-old children with hearing loss.
Crowe, Kathryn; McLeod, Sharynne; Ching, Teresa Y C
2012-01-01
Understanding the cultural and linguistic diversity of young children with hearing loss informs the provision of assessment, habilitation, and education services to both children and their families. Data describing communication mode, oral language use, and demographic characteristics were collected for 406 children with hearing loss and their caregivers when children were 3 years old. The data were from the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study, a prospective, population-based study of children with hearing loss in Australia. The majority of the 406 children used spoken English at home; however, 28 other languages also were spoken. Compared with their caregivers, the children in this study used fewer spoken languages and had higher rates of oral monolingualism. Few children used a spoken language other than English in their early education environment. One quarter of the children used sign to communicate at home and/or in their early education environment. No associations between caregiver hearing status and children's communication mode were identified. This exploratory investigation of the communication modes and languages used by young children with hearing loss and their caregivers provides an initial examination of the cultural and linguistic diversity and heritage language attrition of this population. The findings of this study have implications for the development of resources and the provision of early education services to the families of children with hearing loss, especially where the caregivers use a language that is not the lingua franca of their country of residence.
Subarashii: Encounters in Japanese Spoken Language Education.
ERIC Educational Resources Information Center
Bernstein, Jared; Najmi, Amir; Ehsani, Farzad
1999-01-01
Describes Subarashii, an experimental computer-based interactive spoken-language education system designed to understand what a student is saying in Japanese and respond in a meaningful way in spoken Japanese. Implementation of a preprototype version of the Subarashii system identified strengths and limitations of continuous speech recognition…
Building Spoken Language in the First Plane
ERIC Educational Resources Information Center
Bettmann, Joen
2016-01-01
Through a strong Montessori orientation to the parameters of spoken language, Joen Bettmann makes the case for "materializing" spoken knowledge using the stimulation of real objects and real situations that promote mature discussion around the sensorial aspect of the prepared environment. She lists specific materials in the classroom…
The bridge of iconicity: from a world of experience to the experience of language.
Perniss, Pamela; Vigliocco, Gabriella
2014-09-19
Iconicity, a resemblance between properties of linguistic form (both in spoken and signed languages) and meaning, has traditionally been considered to be a marginal, irrelevant phenomenon for our understanding of language processing, development and evolution. Rather, the arbitrary and symbolic nature of language has long been taken as a design feature of the human linguistic system. In this paper, we propose an alternative framework in which iconicity in face-to-face communication (spoken and signed) is a powerful vehicle for bridging between language and human sensori-motor experience, and, as such, iconicity provides a key to understanding language evolution, development and processing. In language evolution, iconicity might have played a key role in establishing displacement (the ability of language to refer beyond what is immediately present), which is core to what language does; in ontogenesis, iconicity might play a critical role in supporting referentiality (learning to map linguistic labels to objects, events, etc., in the world), which is core to vocabulary development. Finally, in language processing, iconicity could provide a mechanism to account for how language comes to be embodied (grounded in our sensory and motor systems), which is core to meaningful communication.
Unique Auditory Language-Learning Needs of Hearing-Impaired Children: Implications for Intervention.
ERIC Educational Resources Information Center
Johnson, Barbara Ann; Paterson, Marietta M.
Twenty-seven hearing-impaired young adults with hearing potentially usable for language comprehension and a history of speech language therapy participated in this study of training in using residual hearing for the purpose of learning spoken language. Evaluation of their recalled therapy experiences indicated that listening to spoken language did…
Caselli, Naomi K; Pyers, Jennie E
2017-07-01
Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
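The functional form of the regression described above can be sketched as follows. This is not the study's reanalysis of the Anderson & Reilly data: the coefficients are invented for illustration, and the study's actual models additionally included random effects for child and sign. The sketch only shows how the log-odds of a sign being produced combine iconicity, neighborhood density, and lexical frequency.

```python
import math

# Hypothetical sketch of the regression's functional form: log-odds of a
# child producing a sign as a linear combination of iconicity rating,
# phonological neighborhood density, and log lexical frequency.
# All coefficient values below are made up for illustration.

def p_acquired(iconicity, density, log_freq,
               b0=-2.0, b_icon=0.8, b_dens=0.3, b_freq=0.5):
    logit = b0 + b_icon * iconicity + b_dens * density + b_freq * log_freq
    return 1.0 / (1.0 + math.exp(-logit))

# Holding density and frequency fixed, higher iconicity raises the
# predicted probability of acquisition.
low  = p_acquired(iconicity=1, density=2, log_freq=1.0)
high = p_acquired(iconicity=5, density=2, log_freq=1.0)
print(low < high)  # True
```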
Language and reading development in the brain today: neuromarkers and the case for prediction.
Buchweitz, Augusto
2016-01-01
The goal of this article is to provide an account of language development in the brain using the new information about brain function gleaned from cognitive neuroscience. This account goes beyond describing the association between language and specific brain areas to advocate the possibility of predicting language outcomes using brain-imaging data. The goal is to address the current evidence about language development in the brain and prediction of language outcomes. Recent studies will be discussed in the light of the evidence generated for predicting language outcomes and using new methods of analysis of brain data. The present account of brain behavior will address: (1) the development of a hardwired brain circuit for spoken language; (2) the neural adaptation that follows reading instruction and fosters the "grafting" of visual processing areas of the brain onto the hardwired circuit of spoken language; and (3) the prediction of language development and the possibility of translational neuroscience. Brain imaging has allowed for the identification of neural indices (neuromarkers) that reflect typical and atypical language development; the possibility of predicting risk for language disorders has emerged. A mandate to develop a bridge between neuroscience and health and cognition-related outcomes may pave the way for translational neuroscience. Copyright © 2016 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
AI-Based Chatterbots and Spoken English Teaching: A Critical Analysis
ERIC Educational Resources Information Center
Sha, Guoquan
2009-01-01
The aim of various approaches implemented, whether the classical "three Ps" (presentation, practice, and production) or communicative language teaching (CLT), is to achieve communicative competence. Although a lot of software developed for teaching spoken English is dressed up to raise interaction, its methodology is largely rooted in tradition.…
Language as a multimodal phenomenon: implications for language learning, processing and evolution.
Vigliocco, Gabriella; Perniss, Pamela; Vinson, David
2014-09-19
Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is comprised wholly by an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
ERIC Educational Resources Information Center
Nippold, Marilyn A.; Mansfield, Tracy C.; Billow, Jesse L.; Tomblin, J. Bruce
2009-01-01
Purpose: Syntactic development in adolescents was examined using a spoken discourse task and standardized testing. The primary goal was to determine whether adolescents with a history of language impairments would differ from those with a history of typical language development (TLD). This is a companion study to one that examined these same…
Error Awareness and Recovery in Conversational Spoken Language Interfaces
2007-05-01
…an important step towards constructing autonomously self-improving systems. Furthermore, we developed a scalable, data-driven approach that allows a system…problems in spoken dialog (as well as other interactive systems) and constitutes an important step towards building autonomously self-improving…implicitly-supervised learning approach is applicable to other problems, and represents an important step towards developing autonomous, self-improving…
An Investigation into the State of Status Planning of Tiv Language of Central Nigeria
ERIC Educational Resources Information Center
Terkimbi, Atonde
2016-01-01
The Tiv language is one of the major languages spoken in central Nigeria. It belongs to the Benue-Congo subclass of the Bantu parent family and is spoken by over four million speakers across five states of Nigeria. Like many other Nigerian languages, Tiv is in dire need of language planning efforts and strategies. Some previous efforts were…
ERIC Educational Resources Information Center
Rietz, Sandra A.
Children will meet one less obstacle to making the transition from spoken to written fluency in language if, during the transition period, they experience written language that corresponds structurally to their spoken language patterns. Familiar children's folksongs, because they contain some of the structure of children's oral language, provide…
Standardization of the Revised Token Test in Bangla
ERIC Educational Resources Information Center
Kumar, Suman; Kumar, Prashant; Kumari, Punam
2013-01-01
Bengali, or Bangla, is an Indo-Aryan language. It is the state language of West Bengal and Tripura, is also spoken in parts of Assam, and is the official language of Bangladesh. With nearly 230 million speakers (Wikipedia 2010), Bangla is one of the most widely spoken languages in the world. Bangla is the most commonly used language in West…
On the Conventionalization of Mouth Actions in Australian Sign Language.
Johnston, Trevor; van Roekel, Jane; Schembri, Adam
2016-03-01
This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages because the hands produce the signs which individually and in groups are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and the face, plays a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language, making comparisons with other signed languages where data are available, and of the form/meaning pairings that these mouth actions instantiate.
ERIC Educational Resources Information Center
McFerren, Margaret
A survey of the status of language usage in Iran begins with an overview of the usage pattern of Persian, the official language spoken by just over half the population, and the competing languages of six ethnic and linguistic minorities: Azerbaijani, Kurdish, Arabic, Gilaki, Luri-Bakhtiari, and Mazandarani. The development of language policy…
Honoring the Child with Dyslexia in a Montessori Classroom
ERIC Educational Resources Information Center
Skotheim, Meghan Kane
2009-01-01
Speaking, listening, reading, and writing are all language activities. The human capacity for speaking and listening has a biological foundation: wherever there are people, there is spoken language. Acquiring spoken language is an unconscious activity, and, barring any physical deformity or language learning disability, like severe autism, all…
Looking beyond Signed English to Describe the Language of Two Deaf Children.
ERIC Educational Resources Information Center
Suty, Karen A.; Friel-Patti, Sandy
1982-01-01
Examines the spontaneous language of deaf children without forcing the analysis to fit the features of a spoken language system. Suggests linguistic competence of deaf children is commensurate with their cognitive age and is not adequately described by the standard spoken English language tests. (EKN)
Learning English Language by Radio in Primary Schools in Kenya
ERIC Educational Resources Information Center
Odera, Florence Y.
2011-01-01
Radio is one of the most affordable educational technologies available for the use in education and development in developing countries. This article explores the use of school radio broadcast to assist teachers and pupils to learn and improve English language both written and spoken in Kenyan primary schools. English language occupies a central…
ERIC Educational Resources Information Center
Tomblin, J. Bruce, Ed.; Nippold, Marilyn A., Ed.
2014-01-01
This volume presents the findings of a large-scale study of individual differences in spoken (and heard) language development during the school years. The goal of the study was to investigate the degree to which language abilities at school entry were stable over time and influential in the child's overall success in important aspects of…
Spoken Sentence Production in College Students with Dyslexia: Working Memory and Vocabulary Effects
ERIC Educational Resources Information Center
Wiseheart, Rebecca; Altmann, Lori J. P.
2018-01-01
Background: Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. Aims: To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group…
Relationships between Lexical Processing Speed, Language Skills, and Autistic Traits in Children
ERIC Educational Resources Information Center
Abrigo, Erin
2012-01-01
According to current models of spoken word recognition listeners understand speech as it unfolds over time. Eye tracking provides a non-invasive, on-line method to monitor attention, providing insight into the processing of spoken language. In the current project a spoken lexical processing assessment (LPA) confirmed current theories of spoken…
Adaptation and Assessment of a Public Speaking Rating Scale
ERIC Educational Resources Information Center
Iberri-Shea, Gina
2017-01-01
Prominent spoken language assessments such as the Oral Proficiency Interview and the Test of Spoken English have been primarily concerned with speaking ability as it relates to conversation. This paper looks at an additional aspect of spoken language ability, namely public speaking. This study used an adapted form of a public speaking rating scale…
Drop Everything and Write (DEAW): An Innovative Program to Improve Literacy Skills
ERIC Educational Resources Information Center
Joshi, R. Malatesha; Aaron, P. G.; Hill, Nancy; Ocker Dean, Emily; Boulware-Gooden, Regina; Rupley, William H.
2008-01-01
It is believed that language is an innate ability and, therefore, spoken language is acquired naturally and informally. In contrast, written language is thought to be an invention and, therefore, has to be learned through formal instruction. An alternate view, however, is that spoken language and written language are two forms of manifestations of…
The employment of a spoken language computer applied to an air traffic control task.
NASA Technical Reports Server (NTRS)
Laveson, J. I.; Silver, C. A.
1972-01-01
Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve the controller performance.
Spoken Language and Mathematics.
ERIC Educational Resources Information Center
Raiker, Andrea
2002-01-01
States teachers/learners use spoken language in a three part mathematics lesson advocated by the British National Numeracy Strategy. Recognizes language's importance by emphasizing correct use of mathematical vocabulary in raising standards. Finds pupils and teachers appear to ascribe different meanings to scientific words because of their…
The Temporal Structure of Spoken Language Understanding.
ERIC Educational Resources Information Center
Marslen-Wilson, William; Tyler, Lorraine Komisarjevsky
1980-01-01
An investigation of word-by-word time-course of spoken language understanding focused on word recognition and structural and interpretative processes. Results supported an online interactive language processing theory, in which lexical, structural, and interpretative knowledge sources communicate and interact during processing efficiently and…
Spoken English. "Educational Review" Occasional Publications Number Two.
ERIC Educational Resources Information Center
Wilkinson, Andrew; And Others
Modifications of current assumptions both about the nature of the spoken language and about its functions in relation to personality development are suggested in this book. The discussion covers an explanation of "oracy" (the oral skills of speaking and listening); the contributions of linguistics to the teaching of English in Britain; the…
Phonological Awareness: Explicit Instruction for Young Deaf and Hard-of-Hearing Children
ERIC Educational Resources Information Center
Miller, Elizabeth M.; Lederberg, Amy R.; Easterbrooks, Susan R.
2013-01-01
The goal of this study was to explore the development of spoken phonological awareness for deaf and hard-of-hearing children (DHH) with functional hearing (i.e., the ability to access spoken language through hearing). Teachers explicitly taught five preschoolers the phonological awareness skills of syllable segmentation, initial phoneme isolation,…
Visual Sonority Modulates Infants' Attraction to Sign Language
ERIC Educational Resources Information Center
Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain
2018-01-01
The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…
The Primacy of Language Mixing: The Effects of a Matrix System.
ERIC Educational Resources Information Center
Field, Fredric
1999-01-01
Focuses on the differences between bilingual mixtures and creoles. In both types of language, elements and structures of two or more distinct languages are intermingled. By contrasting Nahuatl, spoken in Central Mexico, with Palenquero, a Spanish-based creole spoken near the Caribbean coast of Colombia, examines two components of language thought…
Discussion Forum Interactions: Text and Context
ERIC Educational Resources Information Center
Montero, Begona; Watts, Frances; Garcia-Carbonell, Amparo
2007-01-01
Computer-mediated communication (CMC) is currently used in language teaching as a bridge for the development of written and spoken skills [Kern, R., 1995. "Restructuring classroom interaction with networked computers: effects on quantity and characteristics of language production." "The Modern Language Journal" 79, 457-476]. Within CMC…
Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda
2010-01-01
Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n=26; 2;6 yrs). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children’s facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children’s ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language. PMID:19726000
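The "after controlling for" analysis reported above amounts to a partial correlation: relate processing efficiency to vocabulary in one language after removing the linear effects of the other language's measures. A minimal sketch, using simulated data rather than the study's sample, and regressing out covariates before correlating the residuals:

```python
import numpy as np

# Sketch of a partial correlation: regress both variables on the
# covariates and correlate the residuals. Data are simulated, not the
# bilingual sample from the study.

def partial_corr(x, y, covars):
    """Correlation of x and y after removing linear effects of covars."""
    Z = np.column_stack([np.ones(len(x)), covars])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n = 100
covars = rng.normal(size=(n, 2))   # e.g. other-language speed and vocabulary
x = rng.normal(size=n)             # e.g. processing efficiency in language A
# vocabulary in language A: depends on x and on one covariate, plus noise
y = 0.6 * x + 0.4 * covars[:, 0] + rng.normal(scale=0.5, size=n)
r = partial_corr(x, y, covars)
print(round(r, 2))  # sizable positive partial correlation
```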
NASA Astrophysics Data System (ADS)
Maarif, H. A.; Akmeliawati, R.; Gunawan, T. S.; Shafie, A. A.
2013-12-01
A sign language synthesizer is a method of visualizing sign language movements from spoken language. Sign language (SL) is one of the means used by hearing- and speech-impaired (HSI) people to communicate with hearing people. Unfortunately, the number of people, including HSI people themselves, who are familiar with sign language is very limited, which causes difficulties in communication between hearing and HSI people. Sign language consists not only of hand movements but also of facial expressions, and the two elements complement each other: hand movements convey the meaning of each sign, while facial expressions convey the signer's emotion. Generally, a sign language synthesizer recognizes the spoken language using speech recognition, performs grammatical processing with a context-free grammar, and renders the output with a 3D synthesizer driven by a recorded avatar. This paper analyzes and compares existing techniques for developing a sign language synthesizer, leading to the IIUM Sign Language Synthesizer.
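The three-stage pipeline that the abstract describes (speech recognition, grammatical processing, avatar rendering) can be sketched as follows. The recognizer and renderer are stubs, the "grammar" stage is reduced to a toy gloss lexicon, and all names are hypothetical; the point is only the flow of data between stages.

```python
# Minimal sketch of the three-stage synthesizer pipeline:
# speech recognition -> grammatical processing -> 3D avatar playback.
# Recognizer and renderer are stubs; the grammar stage is a toy lexicon.

TOY_LEXICON = {"hello": "HELLO", "my": "MY", "name": "NAME"}

def recognize_speech(audio):
    # Stub standing in for a real speech recognizer: here the "audio"
    # is already a transcript string.
    return audio.lower().split()

def words_to_glosses(words):
    # Stand-in for the grammatical-processing stage: map each word to a
    # sign gloss, flagging unknown words for fingerspelling (FS:).
    return [TOY_LEXICON.get(w, f"FS:{w.upper()}") for w in words]

def render_avatar(glosses):
    # Stub standing in for playing recorded avatar clips per gloss.
    return " ".join(glosses)

def synthesize(audio):
    return render_avatar(words_to_glosses(recognize_speech(audio)))

print(synthesize("Hello my name Anna"))  # HELLO MY NAME FS:ANNA
```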
Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.
Huettig, Falk; Brouwer, Susanne
2015-05-01
It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.
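One way an anticipation measure like the one above might be derived from eye-tracking records is as the latency of the first target fixation relative to noun onset, where a negative value means the gaze reached the target before the noun was heard. The field names and timestamps below are hypothetical, not the study's data format.

```python
# Sketch of deriving an anticipation measure from fixation records:
# latency of the first target fixation relative to noun onset.
# Negative latency = anticipatory looking. All data are hypothetical.

def anticipation_latency(fixations, noun_onset_ms, target="piano"):
    """First target-fixation time minus noun onset; None if never fixated."""
    times = [t for t, obj in fixations if obj == target]
    return (min(times) - noun_onset_ms) if times else None

# (time_ms, fixated_object) pairs for one trial
trial = [(200, "lamp"), (450, "piano"), (900, "piano")]
print(anticipation_latency(trial, noun_onset_ms=600))  # -150: anticipatory
```

Group differences such as those reported (dyslexic adults anticipating later than controls) would then show up as a shift in the distribution of these latencies.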
ERIC Educational Resources Information Center
Guo, Ling-Yu; McGregor, Karla K.; Spencer, Linda J.
2015-01-01
Purpose: The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation.…
Henry, Maya L; Beeson, Pélagie M; Alexander, Gene E; Rapcsak, Steven Z
2012-02-01
Connectionist theories of language propose that written language deficits arise as a result of damage to semantic and phonological systems that also support spoken language production and comprehension, a view referred to as the "primary systems" hypothesis. The objective of the current study was to evaluate the primary systems account in a mixed group of individuals with primary progressive aphasia (PPA) by investigating the relation between measures of nonorthographic semantic and phonological processing and written language performance and by examining whether common patterns of cortical atrophy underlie impairments in spoken versus written language domains. Individuals with PPA and healthy controls were administered a language battery, including assessments of semantics, phonology, reading, and spelling. Voxel-based morphometry was used to examine the relation between gray matter volumes and language measures within brain regions previously implicated in semantic and phonological processing. In accordance with the primary systems account, our findings indicate that spoken language performance is strongly predictive of reading/spelling profile in individuals with PPA and suggest that common networks of critical left hemisphere regions support central semantic and phonological processes recruited for spoken and written language.
Reflections on Deaf Education: Perspectives of Deaf Senior Citizens
ERIC Educational Resources Information Center
Roberson, Len; Shaw, Sherry
2015-01-01
Parents with deaf children face many challenges in making educational choices, developing language, and fostering a sense of belonging. Other key aspects of life, including concept development and social competency, are also critical decision points faced by parents. Developing language, whether it is through spoken or signed modalities, is of utmost…
ERIC Educational Resources Information Center
Wilson, Leanne; McNeill, Brigid; Gillon, Gail T.
2015-01-01
Successful collaboration among speech and language therapists (SLTs) and teachers fosters the creation of communication friendly classrooms that maximize children's spoken and written language learning. However, these groups of professionals may have insufficient opportunity in their professional study to develop the shared knowledge, perceptions…
ERIC Educational Resources Information Center
Godwin-Jones, Robert
2008-01-01
Creating effective electronic tools for language learning frequently requires large data sets containing extensive examples of actual human language use. Collections of authentic language in spoken and written forms provide developers the means to enrich their applications with real world examples. As the Internet continues to expand…
Spoken Oral Language and Adult Struggling Readers
ERIC Educational Resources Information Center
Bakhtiari, Dariush; Greenberg, Daphne; Patton-Terry, Nicole; Nightingale, Elena
2015-01-01
Oral language is a critical component of reading acquisition. Much of the research concerning the relationship between oral language and reading ability is focused on children, while there is a paucity of research focusing on this relationship for adults who struggle with their reading. Oral language as defined in this paper…
Speech Disruptions in the Narratives of English-Speaking Children with Specific Language Impairment
ERIC Educational Resources Information Center
Guo, Ling-yu; Tomblin, J. Bruce; Samelson, Vicki
2008-01-01
Purpose: This study examined the types, frequencies, and distribution of speech disruptions in the spoken narratives of children with specific language impairment (SLI) and their age-matched (CA) and language-matched (LA) peers. Method: Twenty 4th-grade children with SLI, 20 typically developing CA children, and 20 younger typically developing LA…
ERIC Educational Resources Information Center
Batalova, Jeanne; McHugh, Margie
2010-01-01
While English Language Learner (ELL) students in the United States speak more than 150 languages, Spanish is by far the most common home or first language; it is not, however, the top language spoken by ELLs in every state. This fact sheet, based on analysis of the U.S. Census Bureau's 2009 American Community Survey, documents the top languages spoken…
ERIC Educational Resources Information Center
Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.
2017-01-01
Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…
Williams, Joshua T; Newman, Sharlene D
2017-02-01
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and activation of the targets in sparse and dense neighborhoods was compared. Neighborhood density effects in an auditory-primed lexical decision task were then compared to previous reports of native deaf signers who were only processing sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, increased inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.
Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying
ERIC Educational Resources Information Center
Barberà, Gemma; Zwets, Martine
2013-01-01
In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…
ERIC Educational Resources Information Center
Conway, Christopher M.; Karpicke, Jennifer; Pisoni, David B.
2007-01-01
Spoken language consists of a complex, sequentially arrayed signal that contains patterns that can be described in terms of statistical relations among language units. Previous research has suggested that a domain-general ability to learn structured sequential patterns may underlie language acquisition. To test this prediction, we examined the…
Corpus-Based Authenticity Analysis of Language Teaching Course Books
ERIC Educational Resources Information Center
Peksoy, Emrah; Harmaoglu, Özhan
2017-01-01
In this study, the resemblance of the language learning course books used in Turkey to authentic language spoken by native speakers is explored by using a corpus-based approach. For this, the 10-million-word spoken part of the British National Corpus was selected as reference corpus. After that, all language learning course books used in high…
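Comparisons of this kind are typically quantified per word with a keyness statistic such as Dunning's log-likelihood (G2), computed from the word's frequency in the course-book corpus and in the reference corpus. A minimal sketch with invented counts (the function name and all figures are illustrative, not the study's data):

```python
import math

def log_likelihood(a, b, c, d):
    """Dunning's log-likelihood (G2) keyness statistic.

    a: word count in the study corpus, b: word count in the reference corpus,
    c: study corpus size (tokens), d: reference corpus size (tokens).
    """
    e1 = c * (a + b) / (c + d)  # expected count in the study corpus
    e2 = d * (a + b) / (c + d)  # expected count in the reference corpus
    g2 = 0.0
    if a > 0:
        g2 += a * math.log(a / e1)
    if b > 0:
        g2 += b * math.log(b / e2)
    return 2 * g2

# Toy example: a word overused in a 500k-token course-book corpus relative
# to a 10-million-token spoken reference corpus (all counts invented).
g2 = log_likelihood(a=120, b=300, c=500_000, d=10_000_000)
print(round(g2, 1))  # G2 above 3.84 is significant at p < .05
```

Ranking course-book vocabulary by G2 against a spoken reference corpus like the BNC surfaces the words a textbook over- or under-represents relative to authentic speech.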
Nuffield Early Language Intervention: Evaluation Report and Executive Summary
ERIC Educational Resources Information Center
Sibieta, Luke; Kotecha, Mehul; Skipp, Amy
2016-01-01
The Nuffield Early Language Intervention is designed to improve the spoken language ability of children during the transition from nursery to primary school. It is targeted at children with relatively poor spoken language skills. Three sessions per week are delivered to groups of two to four children starting in the final term of nursery and…
ERIC Educational Resources Information Center
Chambers, Craig G.; Cooke, Hilary
2009-01-01
A spoken language eye-tracking methodology was used to evaluate the effects of sentence context and proficiency on parallel language activation during spoken language comprehension. Nonnative speakers with varying proficiency levels viewed visual displays while listening to French sentences (e.g., "Marie va decrire la poule" [Marie will…
ERIC Educational Resources Information Center
Cisilino, William
Rhaeto-Romansh is a Neo-Latin language with three varieties. Occidental Rhaeto-Romansh (Romansh) is spoken in Switzerland, in the Canton of the Grisons. Central Rhaeto-Romansh (Dolomite Ladin) is spoken in some of the Italian Dolomite valleys, in the Province of Belluno, Bozen/Bolzano, and Trento. Oriental Rhaeto-Romansh (Friulian) is spoken in…
Cognitive aging and hearing acuity: modeling spoken language comprehension.
Wingfield, Arthur; Amichetti, Nicole M; Lash, Amanda
2015-01-01
The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model; where it is strong and where there are gaps to be filled.
Analytic study of the Tadoma method: language abilities of three deaf-blind subjects.
Chomsky, C
1986-09-01
This study reports on the linguistic abilities of 3 adult deaf-blind subjects. The subjects perceive spoken language through touch, placing a hand on the face of the speaker and monitoring the speaker's articulatory motions, a method of speechreading known as Tadoma. Two of the subjects, deaf-blind since infancy, acquired language and learned to speak through this tactile system; the third subject has used Tadoma since becoming deaf-blind at age 7. Linguistic knowledge and productive language are analyzed, using standardized tests and several tests constructed for this study. The subjects' language abilities prove to be extensive, comparing favorably in many areas with hearing individuals. The results illustrate a relatively minor effect of limited language exposure on eventual language achievement. The results also demonstrate the adequacy of the tactile sense, in these highly trained Tadoma users, for transmitting information about spoken language sufficient to support the development of language and learning to produce speech.
ERIC Educational Resources Information Center
Palmer, Barbara C.; Chen, Chia-I; Chang, Sara; Leclere, Judith T.
2006-01-01
According to the 2000 United States Census, the number of Americans age five and older who speak a language other than English at home grew 47 percent over the preceding decade. This group accounts for slightly less than one in five Americans (17.9%). Among the minority languages spoken in the United States, Asian-language speakers, including Chinese and other…
Working with the Bilingual Child Who Has a Language Delay. Meeting Learning Challenges
ERIC Educational Resources Information Center
Greenspan, Stanley I.
2005-01-01
It is very important to determine if a bilingual child's language delay is simply in English or also in the child's native language. Understandably, many children have higher levels of language development in the language spoken at home. To discover if this is the case, observe the child talking with his parents. Sometimes, even without…
Perfecting Language: Experimenting with Vocabulary Learning
ERIC Educational Resources Information Center
Absalom, Matthew
2014-01-01
One of the thorniest aspects of teaching languages is developing students' vocabulary, yet it is impossible to be "an accurate and highly communicative language user with a very small vocabulary" (Milton, 2009, p. 3). Nation (2006) indicates that more vocabulary than previously thought is required to function well both at spoken and…
Parent Telegraphic Speech Use and Spoken Language in Preschoolers with ASD
ERIC Educational Resources Information Center
Venker, Courtney E.; Bolt, Daniel M.; Meyer, Allison; Sindberg, Heidi; Weismer, Susan Ellis; Tager-Flusberg, Helen
2015-01-01
Purpose: There is considerable controversy regarding whether to use telegraphic or grammatical input when speaking to young children with language delays, including children with autism spectrum disorder (ASD). This study examined telegraphic speech use in parents of preschoolers with ASD and associations with children's spoken language 1 year…
Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension
ERIC Educational Resources Information Center
Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica
2015-01-01
When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…
Sources of Difficulty in the Processing of Written Language. Report Series 4.3.
ERIC Educational Resources Information Center
Chafe, Wallace
Ease of language processing varies with the nature of the language involved. Ordinary spoken language is the easiest kind to produce and understand, while writing is a relatively new development. On thoughtful inspection, the readability of writing has shown itself to be a complex topic requiring insights from many academic disciplines and…
ERIC Educational Resources Information Center
Goldin-Meadow, Susan; Namboodiripad, Savithry; Mylander, Carolyn; Özyürek, Asli; Sancar, Burcu
2015-01-01
Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called "homesigns", which have many of the properties of natural language--the so-called resilient properties of language. We explored the resilience of structure built…
The Development of Language Behavior in an Autistic Child Using a Total Communication Approach.
ERIC Educational Resources Information Center
Cohen, Morris
Following a review of the literature, the paper describes a total communication approach to the language development of a 4-year-old autistic child. It is explained that the child was videotaped while being trained to simultaneously use elements of American sign language together with the correct spoken word or words. Training procedures are…
Fujiwara, Keizo; Naito, Yasushi; Senda, Michio; Mori, Toshiko; Manabe, Tomoko; Shinohara, Shogo; Kikuchi, Masahiro; Hori, Shin-Ya; Tona, Yosuke; Yamazaki, Hiroshi
2008-04-01
The use of fluorodeoxyglucose positron emission tomography (FDG-PET) with a visual language task provided objective information on the development and plasticity of cortical language networks. This approach could help individuals involved in the habilitation and education of prelingually deafened children to decide upon the appropriate mode of communication. To investigate the cortical processing of the visual component of language and the effect of deafness upon this activity. Six prelingually deafened children participated in this study. The subjects were numbered 1-6 in the order of their spoken communication skills. In the time period between an intravenous injection of 370 MBq 18F-FDG and PET scanning of the brain, each subject was instructed to watch a video of the face of a speaking person. The cortical radioactivity of each deaf child was compared with that of a group of normal-hearing adults using a t test in a basic SPM2 model. The widest bilaterally activated cortical area was detected in subject 1, who was the worst user of spoken language. By contrast, there was no significant difference between subject 6, who was the best user of spoken language with a hearing aid, and the normal-hearing group.
Orthographic effects in spoken word recognition: Evidence from Chinese.
Qu, Qingqing; Damian, Markus F
2017-06-01
Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.
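The two contrasts described above reduce to differences between condition means of response latencies: semantic facilitation (unrelated minus semantically related) and orthographic interference (orthographically related minus unrelated). A minimal sketch with invented latencies (none of these numbers come from the study):

```python
from statistics import mean

# Hypothetical per-item mean response latencies (ms) in a semantic
# relatedness judgment task; all numbers are invented for illustration.
rt = {
    "semantic":     [612, 598, 630, 605, 589],   # related in meaning
    "orthographic": [701, 715, 690, 708, 722],   # share written characters only
    "unrelated":    [664, 672, 651, 669, 660],   # baseline
}

means = {cond: mean(vals) for cond, vals in rt.items()}

# The two contrasts of interest:
semantic_facilitation = means["unrelated"] - means["semantic"]
orthographic_interference = means["orthographic"] - means["unrelated"]

print(f"semantic facilitation:     {semantic_facilitation:+.1f} ms")
print(f"orthographic interference: {orthographic_interference:+.1f} ms")
```

A positive interference value on semantically unrelated pairs is the signature the study reports: orthographic overlap alone slows the "unrelated" judgment.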
Phonological awareness: explicit instruction for young deaf and hard-of-hearing children.
Miller, Elizabeth M; Lederberg, Amy R; Easterbrooks, Susan R
2013-04-01
The goal of this study was to explore the development of spoken phonological awareness for deaf and hard-of-hearing children (DHH) with functional hearing (i.e., the ability to access spoken language through hearing). Teachers explicitly taught five preschoolers the phonological awareness skills of syllable segmentation, initial phoneme isolation, and rhyme discrimination in the context of a multifaceted emergent literacy intervention. Instruction occurred in settings where teachers used simultaneous communication or spoken language only. A multiple-baseline across skills design documented a functional relation between instruction and skill acquisition for those children who did not have the skills at baseline with one exception; one child did not meet criteria for syllable segmentation. These results were confirmed by changes on phonological awareness tests that were administered at the beginning and end of the school year. We found that DHH children who varied in primary communication mode, chronological age, and language ability all benefited from explicit instruction in phonological awareness.
Tonal Language Background and Detecting Pitch Contour in Spoken and Musical Items
ERIC Educational Resources Information Center
Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.
2013-01-01
An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…
User-Centred Design for Chinese-Oriented Spoken English Learning System
ERIC Educational Resources Information Center
Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting
2016-01-01
Oral production is an important part of English learning. The lack of a language environment with efficient instruction and feedback is a major obstacle to non-native speakers' spoken English improvement. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…
Phonological Awareness in Mandarin of Chinese and Americans
ERIC Educational Resources Information Center
Hu, Min
2009-01-01
Phonological awareness (PA) is the ability to analyze spoken language into its component sounds and to manipulate these smaller units. Literature review related to PA shows that a variety of factor groups play a role in PA in Mandarin such as linguistic experience (spoken language, alphabetic literacy, and second language learning), item type,…
A Mother Tongue Spoken Mainly by Fathers.
ERIC Educational Resources Information Center
Corsetti, Renato
1996-01-01
Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families are known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggests that this "artificial bilingualism" can be as successful…
Understanding Communication among Deaf Students Who Sign and Speak: A Trivial Pursuit?
ERIC Educational Resources Information Center
Marschark, Marc; Convertino, Carol M.; Macias, Gayle; Monikowski, Christine M.; Sapere, Patricia; Seewagen, Rosemarie
2007-01-01
Classroom communication between deaf students was modeled using a question-and-answer game. Participants consisted of student pairs that relied on spoken language, pairs that relied on American Sign Language (ASL), and mixed pairs in which one student used spoken language and one signed. Although the task encouraged students to request…
ERIC Educational Resources Information Center
Woll, Bencie; Morgan, Gary
2012-01-01
Various theories of developmental language impairments have sought to explain these impairments in modality-specific ways--for example, that the language deficits in SLI or Down syndrome arise from impairments in auditory processing. Studies of signers with language impairments, especially those who are bilingual in a spoken language as well as a…
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Select Committee on Indian Affairs.
Past U.S. policies toward Indian and other Native American languages attempted to suppress the use of these languages in government-operated Indian schools as a means of assimilating Indian children. About 155 Native languages are spoken today in the United States, but only 20 are spoken by people of all ages. The Native American Languages Act of 1990…
ERIC Educational Resources Information Center
Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.
2018-01-01
Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…
ERIC Educational Resources Information Center
Al-Nofaie, Haifa
2018-01-01
This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (hence JFL), a language which is rarely spoken in the country. Studies regarding children's motivation for learning foreign languages that are not widely spread in their contexts in informal settings are scarce. The aim of the study…
Language Immersion in the Self-Study Mode E-Course
ERIC Educational Resources Information Center
Sobolev, Olga
2016-01-01
This paper assesses the efficiency of the "Language Immersion e-Course" developed at the London School of Economics and Political Science (LSE) Language Centre. The new self-study revision e-course, promoting students' proficiency in spoken and aural Russian through autonomous learning, is based on the Michel Thomas method, and is…
ERIC Educational Resources Information Center
Andrews, Jean F.; Rusher, Melissa
2010-01-01
The authors present a perspective on emerging bilingual deaf students who are exposed to, learning, and developing two languages--American Sign Language (ASL) and English (spoken English, manually coded English, and English reading and writing). The authors suggest that though deaf children may lack proficiency or fluency in either language during…
Indirect Language Stimulation (ILS): AAC Techniques To Promote Communication Competence.
ERIC Educational Resources Information Center
Boose, Martha A.; Stinnett, Tessa
This report discusses the outcomes of a study that used indirect language stimulation techniques and modeling to encourage language development in a 5-year-old child with cerebral palsy. Initially, the student's communication system had very severe limitations. He used fewer than 10 spoken words which were unintelligible to most listeners. Both…
The Hebrew CHILDES corpus: transcription and morphological analysis
Albert, Aviad; MacWhinney, Brian; Nir, Bracha
2014-01-01
We present a corpus of transcribed spoken Hebrew that reflects spoken interactions between children and adults. The corpus is an integral part of the CHILDES database, which distributes similar corpora for over 25 languages. We introduce a dedicated transcription scheme for the spoken Hebrew data that is sensitive to both the phonology and the standard orthography of the language. We also introduce a morphological analyzer that was specifically developed for this corpus. The analyzer adequately covers the entire corpus, producing detailed correct analyses for all tokens. Evaluation on a new corpus reveals high coverage as well. Finally, we describe a morphological disambiguation module that selects the correct analysis of each token in context. The result is a high-quality morphologically-annotated CHILDES corpus of Hebrew, along with a set of tools that can be applied to new corpora. PMID:25419199
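A context-based disambiguation module of the kind described can be sketched, in very reduced form, as choosing the candidate analysis whose part of speech is most frequent after the preceding token's part of speech. Everything below (the tagset, the counts, the Hebrew-flavored example forms) is invented for illustration and is not the corpus's actual scheme:

```python
from collections import defaultdict

# bigram_counts[(prev_pos, pos)] = frequency observed in a hand-annotated
# sample (toy figures; a real module would train on the annotated corpus).
bigram_counts = defaultdict(int, {
    ("DET", "NOUN"): 50, ("DET", "VERB"): 2,
    ("PRON", "VERB"): 40, ("PRON", "NOUN"): 5,
})

def disambiguate(prev_pos, candidates):
    """Pick the candidate analysis (pos, lemma) best supported by context."""
    return max(candidates, key=lambda c: bigram_counts[(prev_pos, c[0])])

# A surface form that could be analyzed as a noun or a verb.
candidates = [("NOUN", "sefer"), ("VERB", "safar")]
print(disambiguate("DET", candidates))   # after a determiner: noun reading
print(disambiguate("PRON", candidates))  # after a pronoun: verb reading
```

A production disambiguator would use richer context and smoothing, but the core decision, ranking morphological analyses by contextual fit, has this shape.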
Ivanova, Maria V.; Hallowell, Brooke
2013-01-01
Background: There are a limited number of aphasia language tests in the majority of the world's commonly spoken languages. Furthermore, few aphasia tests in languages other than English have been standardized and normed, and few have supportive psychometric data pertaining to reliability and validity. The lack of standardized assessment tools across many of the world's languages poses serious challenges to clinical practice and research in aphasia. Aims: The current review addresses this lack of assessment tools by providing conceptual and statistical guidance for the development of aphasia assessment tools and establishment of their psychometric properties. Main Contribution: A list of aphasia tests in the 20 most widely spoken languages is included. The pitfalls of translating an existing test into a new language versus creating a new test are outlined. Factors to consider in determining test content are discussed. Further, a description of test items corresponding to different language functions is provided, with special emphasis on implementing important controls in test design. Next, a broad review of principal psychometric properties relevant to aphasia tests is presented, with specific statistical guidance for establishing psychometric properties of standardized assessment tools. Conclusions: This article may be used to help guide future work on developing, standardizing and validating aphasia language tests. The considerations discussed are also applicable to the development of standardized tests of other cognitive functions. PMID:23976813
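One psychometric property such guidance typically covers is internal consistency, commonly estimated with Cronbach's alpha. A minimal sketch using the standard formula, with invented item scores (the data are not from any real test):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    `items` is a list of item score lists, one inner list per test item,
    aligned across the same respondents.
    """
    k = len(items)
    item_vars = sum(pvariance(scores) for scores in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented scores: 4 test items x 6 respondents.
items = [
    [3, 4, 2, 5, 4, 3],
    [2, 4, 2, 5, 3, 3],
    [3, 5, 1, 4, 4, 2],
    [2, 4, 2, 5, 4, 3],
]
print(round(cronbach_alpha(items), 2))
```

Conventionally, alpha of .70 or higher is taken as acceptable for a standardized assessment, though thresholds vary by purpose.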
ERIC Educational Resources Information Center
Sobolev, Olga; Nesterova, Tatiana
2014-01-01
Language testing and second language acquisition research are both concerned with proficiency in the second language; given this shared interest, the rapprochement between these two domains may prove revealing and productive not only in terms of teaching practices, but also in taking a wide view of language, ranging across cognition, society and…
ERIC Educational Resources Information Center
Chebanne, Andy
2016-01-01
Khoisan languages are spoken by various culturally diverse communities of Southern Africa. These languages also present important linguistic diversity. Some Khoisan language communities are under-researched, marginalized, and experiencing sustained sociolinguistic forces that threaten them. For those that have been documented,…
Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.
2014-01-01
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
PMID:24904497
ERIC Educational Resources Information Center
Zayed, Jihan El-Sayed Ahmed
2009-01-01
This study aimed at determining the effectiveness of using reflection in developing Tourism and Hospitality students' oracy in English. Two modes of reflection (i.e., "active reflection" and "proactive reflection") were used for developing two aspects of oracy: language awareness of some features of spoken language (i.e.,…
Spencer, Sarah; Clegg, Judy; Stackhouse, Joy; Rush, Robert
2017-03-01
Well-documented associations exist between socio-economic background and language ability in early childhood, and between educational attainment and language ability in children with clinically referred language impairment. However, very little research has looked at the associations between language ability, educational attainment and socio-economic background during adolescence, particularly in populations without language impairment. To investigate: (1) whether adolescents with higher educational outcomes overall had higher language abilities; and (2) associations between adolescent language ability, socio-economic background and educational outcomes, specifically in relation to Mathematics, English Language and English Literature GCSE grade. A total of 151 participants completed five standardized language assessments measuring vocabulary, comprehension of sentences and spoken paragraphs, and narrative skills and one nonverbal assessment when between 13 and 14 years old. These data were compared with the participants' educational achievement obtained upon leaving secondary education (16 years old). Univariate logistic regressions were employed to identify those language assessments and demographic factors that were associated with achieving a targeted A*-C grade in English Language, English Literature and Mathematics General Certificate of Secondary Education (GCSE) at 16 years. Further logistic regressions were then conducted to examine further the contribution of socio-economic background and spoken language skills in the multivariate models. Vocabulary, comprehension of sentences and spoken paragraphs, and mean length of utterance in a narrative task, along with socio-economic background, contributed to whether participants achieved an A*-C grade in GCSE Mathematics and English Language and English Literature. Nonverbal ability contributed to English Language and Mathematics.
The results of multivariate logistic regressions then found that vocabulary skills were particularly relevant to all three GCSE outcomes. Socio-economic background only remained important for English Language, once language assessment scores and demographic information were considered. Language ability, and in particular vocabulary, plays an important role for educational achievement. Results confirm a need for ongoing support for spoken language ability throughout secondary education and a potential role for speech and language therapy provision in the continuing drive to reduce the gap in educational attainment between groups from differing socio-economic backgrounds. © 2016 Royal College of Speech and Language Therapists.
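The univariate logistic regressions described above model the odds of achieving the target grade as a function of a single predictor, with the exponentiated slope read as an odds ratio. A minimal sketch with invented data (the `fit_logistic` helper and all scores are illustrative, not the study's analysis):

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Univariate logistic regression fitted by gradient descent.

    xs: predictor (e.g., a standardized vocabulary score),
    ys: binary outcome (1 = achieved the target grade, 0 = did not).
    """
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(b0 + b1 * x)))  # predicted probability
            g0 += (p - y) / n
            g1 += (p - y) * x / n
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Invented data: higher vocabulary z-scores tend to co-occur with
# achieving the target grade (these are NOT the study's data).
xs = [-1.5, -1.0, -0.5, -0.2, 0.0, 0.3, 0.8, 1.2, 1.6, 2.0]
ys = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]

b0, b1 = fit_logistic(xs, ys)
odds_ratio = math.exp(b1)  # change in the odds of success per SD of vocabulary
print(f"slope={b1:.2f}, odds ratio per SD={odds_ratio:.2f}")
```

A positive slope (odds ratio above 1) corresponds to the reported pattern: higher vocabulary scores raise the odds of an A*-C grade.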
L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm
ERIC Educational Resources Information Center
Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour
2016-01-01
The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…
Do Phonological Constraints on the Spoken Word Affect Visual Lexical Decision?
ERIC Educational Resources Information Center
Lee, Yang; Moreno, Miguel A.; Carello, Claudia; Turvey, M. T.
2013-01-01
Reading a word may involve the spoken language in two ways: in the conversion of letters to phonemes according to the conventions of the language's writing system and the assimilation of phonemes according to the language's constraints on speaking. If so, then words that require assimilation when uttered would require a change in the phonemes…
ERIC Educational Resources Information Center
Geytenbeek, Joke J. M.; Heim, Margriet J. M.; Knol, Dirk L.; Vermeulen, R. Jeroen; Oostrom, Kim J.
2015-01-01
Background: Children with severe cerebral palsy (CP) (i.e., "non-speaking children with severely limited mobility") are restricted in many domains that are important to the acquisition of language. Aims: To investigate comprehension of spoken language at the sentence-type level in non-speaking children with severe CP. Methods & Procedures…
Examining the Concept of Subordination in Spoken L1 and L2 English: The Case of "If"-Clauses
ERIC Educational Resources Information Center
Basterrechea, María; Weinert, Regina
2017-01-01
This article explores the application of research on native spoken language to second language learning, focusing on the concept of subordination. Second language (L2) learners' ability to integrate subordinate clauses is considered an indication of higher proficiency (e.g., Ellis & Barkhuizen, 2005; Tarone & Swierzbin, 2009). However, the notion…
Developing Corpus-Based Materials to Teach Pragmatic Routines
ERIC Educational Resources Information Center
Bardovi-Harlig, Kathleen; Mossman, Sabrina; Vellenga, Heidi E.
2015-01-01
This article describes how to develop teaching materials for pragmatics based on authentic language by using a spoken corpus. The authors show how to use the corpus in conjunction with textbooks to identify pragmatic routines for speech acts and how to extract appropriate language samples and adapt them for classroom use. They demonstrate how to…
Action and object word writing in a case of bilingual aphasia.
Kambanaros, Maria; Messinis, Lambros; Anyfantis, Emmanouil
2012-01-01
We report the spoken and written naming of a bilingual speaker with aphasia in two languages that differ in morphological complexity, orthographic transparency and script: Greek and English. AA presented with difficulties in spoken picture naming together with preserved written picture naming for action words in Greek. In English, AA showed similar performance across both tasks for action and object words, i.e. difficulties retrieving action and object names in both spoken and written naming. Our findings support the hypothesis that the cognitive processes used for spoken and written naming are independent components of the language system and can be selectively impaired after brain injury. In the case of bilingual speakers, such processes impact on both languages. We conclude that grammatical category is an organizing principle in bilingual dysgraphia.
Evans, Julia L; Gillam, Ronald B; Montgomery, James W
2018-05-10
This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition at both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLD.
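The propensity matching mentioned above pairs each case with the most similar control on an estimated score. The sketch below is a toy greedy nearest-neighbour pairing on hypothetical scores; the function name, data, and greedy procedure are illustrative assumptions, not the study's actual matching algorithm.

```python
# Toy sketch of 1:1 greedy nearest-neighbour propensity matching
# (illustrative only; the study's exact matching procedure is not given here).
def greedy_match(scores_a, scores_b):
    """Match each case in group A to its nearest unused control in group B
    by propensity score. Returns a list of (index_a, index_b) pairs."""
    unused = set(range(len(scores_b)))
    pairs = []
    for i, sa in enumerate(scores_a):
        # pick the still-unmatched control with the closest score
        j = min(unused, key=lambda k: abs(scores_b[k] - sa))
        pairs.append((i, j))
        unused.remove(j)
    return pairs

dld = [0.2, 0.8, 0.5]          # hypothetical propensity scores (cases)
td = [0.55, 0.15, 0.9, 0.4]    # hypothetical scores (candidate controls)
print(greedy_match(dld, td))   # → [(0, 1), (1, 2), (2, 0)]
```

Real matching workflows typically also enforce a caliper (a maximum allowed score distance) and check covariate balance after matching; both are omitted here for brevity.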
Jones, -A C; Toscano, E; Botting, N; Marshall, C-R; Atkinson, J R; Denmark, T; Herman, -R; Morgan, G
2016-12-01
Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and in the use of micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single-word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices correlated most strongly with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Linking Language with Embodied and Teleological Representations of Action for Humanoid Cognition
Lallee, Stephane; Madden, Carol; Hoen, Michel; Dominey, Peter Ford
2010-01-01
The current research extends our framework for embodied language and action comprehension to include a teleological representation that allows goal-based reasoning for novel actions. The objective of this work is to implement and demonstrate the advantages of a hybrid, embodied-teleological approach to action–language interaction, both from a theoretical perspective, and via results from human–robot interaction experiments with the iCub robot. We first demonstrate how a framework for embodied language comprehension allows the system to develop a baseline set of representations for processing goal-directed actions such as “take,” “cover,” and “give.” Spoken language and visual perception are input modes for these representations, and the generation of spoken language is the output mode. Moving toward a teleological (goal-based reasoning) approach, a crucial component of the new system is the representation of the subcomponents of these actions, which includes relations between initial enabling states, and final resulting states for these actions. We demonstrate how grammatical categories including causal connectives (e.g., because, if–then) can allow spoken language to enrich the learned set of state-action-state (SAS) representations. We then examine how this enriched SAS inventory enhances the robot's ability to represent perceived actions in which the environment inhibits goal achievement. The paper addresses how language comes to reflect the structure of action, and how it can subsequently be used as an input and output vector for embodied and teleological aspects of action. PMID:20577629
Overlapping Networks Engaged during Spoken Language Production and Its Cognitive Control
Geranmayeh, Fatemeh; Wise, Richard J.S.; Mehta, Amrish; Leech, Robert
2014-01-01
Spoken language production is a complex brain function that relies on large-scale networks. These include domain-specific networks that mediate language-specific processes, as well as domain-general networks mediating top-down and bottom-up attentional control. Language control is thought to involve a left-lateralized fronto-temporal-parietal (FTP) system. However, these regions do not always activate for language tasks and similar regions have been implicated in nonlinguistic cognitive processes. These inconsistent findings suggest that either the left FTP is involved in multidomain cognitive control or that there are multiple spatially overlapping FTP systems. We present evidence from an fMRI study using multivariate analysis to identify spatiotemporal networks involved in spoken language production in humans. We compared spoken language production (Speech) with multiple baselines, counting (Count), nonverbal decision (Decision), and “rest,” to pull apart the multiple partially overlapping networks that are involved in speech production. A left-lateralized FTP network was activated during Speech and deactivated during Count and nonverbal Decision trials, implicating it in cognitive control specific to sentential spoken language production. A mirror right-lateralized FTP network was activated in the Count and Decision trials, but not Speech. Importantly, a second overlapping left FTP network showed relative deactivation in Speech. These three networks, with distinct time courses, overlapped in the left parietal lobe. Contrary to the standard model of the left FTP as being dominant for speech, we revealed a more complex pattern within the left FTP, including at least two left FTP networks with competing functional roles, only one of which was activated in speech production. PMID:24966373
Pizer, Ginger; Walters, Keith; Meier, Richard P
2013-01-01
Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing communities. Despite this variation, the interviewees' moral judgments of their own and others' communicative behavior suggest that these adults share a language ideology concerning the obligation of all family members to expend effort to overcome potential communication barriers. To our knowledge, such a language ideology is not similarly pervasive among spoken-language bilingual families, raising the question of whether there is something unique about family bimodal bilingualism that imposes different rights and responsibilities on family members than spoken-language family bilingualism does. This ideology unites an otherwise diverse group of interviewees, each of whom preemptively denied being a "typical CODA [child of deaf adults]."
Enduring Advantages of Early Cochlear Implantation for Spoken Language Development
Geers, Ann E.; Nicholas, Johanna G.
2013-01-01
Purpose: To determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12-38 months of age. Relative advantages of receiving a bilateral CI after age 4.5, better pre-CI aided hearing, and longer CI experience were also examined. Method: Sixty children participated in a prospective longitudinal study of outcomes at 4.5 and 10.5 years of age. Twenty-nine children received a sequential second CI. Test scores were compared with normative samples of hearing age-mates, and predictors of outcomes were identified. Results: Standard scores on language tests at 10.5 years of age remained significantly correlated with age of first cochlear implantation. Scores were not associated with receipt of a second, sequentially acquired CI. Significantly higher scores were achieved for vocabulary as compared with overall language, a finding not evident when the children were tested at younger ages. Conclusion: Age-appropriate spoken language skills continued to be more likely with younger AOI, even after an average of 8.6 years of additional CI use. Receipt of a second implant between ages 4-10 years and longer duration of device use did not provide significant added benefit. PMID:23275406
ERIC Educational Resources Information Center
Brown, Gillian
1981-01-01
Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…
ERIC Educational Resources Information Center
Cutting, Joan; Murphy, Brona
2010-01-01
The seminar, organised by Joan Cutting and Brona Murphy, aimed: (1) to bring together researchers involved in both emergent and established academic corpora (written and spoken) as well as linguists, lecturers and teachers researching in education, be it language teaching, language-teacher training or continuing professional development in…
ERIC Educational Resources Information Center
Harper-Hill, Keely; Copland, David; Arnott, Wendy
2013-01-01
The primary aim of this paper was to investigate heterogeneity in language abilities of children with a confirmed diagnosis of an ASD (N = 20) and children with typical development (TD; N = 15). Group comparisons revealed no differences between ASD and TD participants on standard clinical assessments of language ability, reading ability or…
ERIC Educational Resources Information Center
Suendermann-Oeft, David; Ramanarayanan, Vikram; Yu, Zhou; Qian, Yao; Evanini, Keelan; Lange, Patrick; Wang, Xinhao; Zechner, Klaus
2017-01-01
We present work in progress on a multimodal dialog system for English language assessment using a modular cloud-based architecture adhering to open industry standards. Among the modules being developed for the system, multiple modules heavily exploit machine learning techniques, including speech recognition, spoken language proficiency rating,…
State-of-the-Art in the Development of the Lokono Language
ERIC Educational Resources Information Center
Rybka, Konrad
2015-01-01
Lokono is a critically endangered Northern Arawakan language spoken in the pericoastal areas of the Guianas (Guyana, Suriname, French Guiana). Today, in every Lokono village there remains only a small number of elderly native speakers. However, in spite of the ongoing language loss, across the three Guianas as well as in the Netherlands, where a…
Splenium Development and Early Spoken Language in Human Infants
ERIC Educational Resources Information Center
Swanson, Meghan R.; Wolff, Jason J.; Elison, Jed T.; Gu, Hongbin; Hazlett, Heather C.; Botteron, Kelly; Styner, Martin; Paterson, Sarah; Gerig, Guido; Constantino, John; Dager, Stephen; Estes, Annette; Vachet, Clement; Piven, Joseph
2017-01-01
The association between developmental trajectories of language-related white matter fiber pathways from 6 to 24 months of age and individual differences in language production at 24 months of age was investigated. The splenium of the corpus callosum, a fiber pathway projecting through the posterior hub of the default mode network to occipital…
The Languages in Nigerian Socio-Political Domains: Features and Functions
ERIC Educational Resources Information Center
Ayeomoni, Moses Omoniyi
2012-01-01
This paper views Nigeria as a multilingual country with diverse languages and cultures, to the extent that the total number of languages currently spoken in Nigeria is about 500 (see Adegbite 2010). This linguistic diversity in the country has occasioned the development and the spread of the concepts of bilingualism, multilingualism, diglossia and…
Brebner, Chris; McCormack, Paul; Liow, Susan Rickard
2016-01-01
The phonological and morphosyntactic structures of English and Mandarin contrast maximally, and an increasing number of bilinguals speak these two languages. Speech and language therapists need to understand bilingual development for children speaking these languages in order reliably to assess and provide intervention for this population. This study examined the marking of verb tense in the English of two groups of bilingual pre-schoolers learning these languages in a multilingual setting where the main educational language is English. The main research question addressed was: are there differences in the rate and pattern of acquisition of verb-tense marking for English-L1 children compared with Mandarin-L1 children? Spoken language samples in English from 481 English-Mandarin bilingual children were elicited using a 10-item action picture test and analysed for each child's use of verb-tense markers: present progressive '-ing', regular past tense '-ed', third-person singular '-s', and irregular past tense and irregular past-participle forms. For 4-6 year olds, the use of inflectional markers by the different language-dominance groups was compared statistically using non-parametric tests. This study provides further evidence that bilingual language development is not the same as monolingual language development. The results show that there are very different rates and patterns of verb-tense marking in English for English-L1 and Mandarin-L1 children. Furthermore, they show that bilingual language development in English in Singapore is not the same as monolingual language development in English, and that there are differences in development depending on language dominance.
Valid and reliable assessment of bilingual children's language skills needs to consider the characteristics of all languages spoken; obtaining accurate information on language use over time and accurately establishing language dominance are essential in order to make a differential diagnosis between language difference and impairment. © 2015 Royal College of Speech and Language Therapists.
ERIC Educational Resources Information Center
Jacobs, Sue-Ellen; Tuttle, Siri G.; Martinez, Esther
1998-01-01
The Tewa Language Project CD-ROM was developed at the University of Washington in collaboration with San Juan Pueblo, New Mexico, to restore the use of spoken and written Tewa and to repatriate cultural property. The CD-ROM contains an interactive multimedia dictionary, songs, stories, photographs, land and water data, and linguistic resources…
Glossary of Terms Relating to Languages of the Middle East.
ERIC Educational Resources Information Center
Ferguson, Charles A.
This glossary gives brief, non-technical explanations of the following kinds of terms: (1) names of all important languages now spoken in the Middle East, or known to have been spoken in the area; (2) names of language families represented in the area; (3) descriptive terms used with reference to the writing systems of the area; (4) names of…
The Lightening Veil: Language Revitalization in Wales
ERIC Educational Resources Information Center
Williams, Colin H.
2014-01-01
The Welsh language, which is indigenous to Wales, is one of six Celtic languages. It is spoken by 562,000 speakers, 19% of the population of Wales, according to the 2011 U.K. Census, and it is estimated that it is spoken by a further 200,000 residents elsewhere in the United Kingdom. No exact figures exist for the undoubted thousands of other…
ERIC Educational Resources Information Center
Shaw, Emily P.
2013-01-01
This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…
ERIC Educational Resources Information Center
Rodd, Jennifer M.; Longe, Olivia A.; Randall, Billi; Tyler, Lorraine K.
2010-01-01
Spoken language comprehension is known to involve a large left-dominant network of fronto-temporal brain regions, but there is still little consensus about how the syntactic and semantic aspects of language are processed within this network. In an fMRI study, volunteers heard spoken sentences that contained either syntactic or semantic ambiguities…
Notes from the Field: Lolak--Another Moribund Language of Indonesia, with Supporting Audio
ERIC Educational Resources Information Center
Lobel, Jason William; Paputungan, Ade Tatak
2017-01-01
This paper consists of a short multimedia introduction to Lolak, a near-extinct Greater Central Philippine language traditionally spoken in three small communities on the island of Sulawesi in Indonesia. In addition to being one of the most underdocumented languages in the area, it is also spoken by one of the smallest native speaker populations…
La mort d'une langue: le judeo-espagnol (The Death of a Language: The Spanish Spoken by Jews)
ERIC Educational Resources Information Center
Renard, Raymond
1971-01-01
Describes the Sephardic culture which flourished in the Balkans, the Ottoman Empire, and North Africa during the Middle Ages. Suggests the use of "Ladino," the language of medieval Spain spoken by the expelled Jews. (DS)
Speech perception and spoken word recognition: past and present.
Jusczyk, Peter W; Luce, Paul A
2002-02-01
The scientific study of the perception of spoken language has been an exciting, prolific, and productive area of research for more than 50 years. We have learned much about infants' and adults' remarkable capacities for perceiving and understanding the sounds of their language, as evidenced by our increasingly sophisticated theories of acquisition, process, and representation. We present a selective but, we hope, representative review of the past half century of research on speech perception, paying particular attention to the historical and theoretical contexts within which this research was conducted. Our foci in this review fall on three principal topics: early work on the discrimination and categorization of speech sounds, more recent efforts to understand the processes and representations that subserve spoken word recognition, and research on how infants acquire the capacity to perceive their native language. Our intent is to provide the reader a sense of the progress our field has experienced over the last half century in understanding the human's extraordinary capacity for the perception of spoken language.
The role of voice input for human-machine communication.
Cohen, P R; Oviatt, S L
1995-01-01
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent, real-time speech recognition and understanding of naturally spoken utterances with vocabularies of 1,000-2,000 words and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses the potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses the information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803
Tur-Kaspa, Hana; Dromi, Esther
2001-04-01
The present study reports a detailed analysis of written and spoken language samples of Hebrew-speaking children aged 11-13 years who are deaf. It focuses on the description of various grammatical deviations in the two modalities. Participants were 13 students with hearing impairments (HI) attending special classrooms integrated into two elementary schools in Tel Aviv, Israel, and 9 students with normal hearing (NH) in regular classes in these same schools. Spoken and written language samples were collected from all participants using the same five preplanned elicitation probes. Students with HI were found to display significantly more grammatical deviations than their NH peers in both their spoken and written language samples. Most importantly, between-modality differences were noted. The participants with HI exhibited significantly more grammatical deviations in their written language samples than in their spoken samples. However, the distribution of grammatical deviations across categories was similar in the two modalities. The most common grammatical deviations in order of their frequency were failure to supply obligatory morphological markers, failure to mark grammatical agreement, and the omission of a major syntactic constituent in a sentence. Word order violations were rarely recorded in the Hebrew samples. Performance differences in the two modalities encourage clinicians and teachers to facilitate target linguistic forms in diverse communication contexts. Furthermore, the identification of linguistic targets for intervention must be based on the unique grammatical structure of the target language.
ERIC Educational Resources Information Center
Marascuilo, Leonard A.; Loban, Walter
The purpose of this study was to determine whether language behavior represents an early conditioned verbal response or whether it changes with age and experience; to this end, the study attempted to define unique isolates of language on the basis of actual language produced by young children. Tape-recorded data were collected for 12 years from 211 children in…
Teaching Deaf Children to Talk.
ERIC Educational Resources Information Center
Ewing, Alexander; Ewing, Ethel C.
Designed as a text for audiologists and teachers of hearing-impaired children, this book presents basic information about spoken language, hearing, and lipreading. Methods and results of evaluating the spoken language of aurally handicapped children without using reading or writing are reported. Various types of individual and group hearing aids are…
ERIC Educational Resources Information Center
Huang, Li-Shih
2010-01-01
This paper reports on a small-scale study that was the first to explore raising second-language (L2) learners' awareness of speaking strategies as mediated by three modalities of task-specific reflection--individual written reflection, individual spoken reflection, and group spoken reflection. Though research in such areas as L2 writing, teacher's…
ERIC Educational Resources Information Center
Minaabad, Malahat Shabani
2011-01-01
Translation is the process of transferring written or spoken source language (SL) texts into equivalent written or spoken target language (TL) texts. Translation studies (TS) relies so heavily on a concept of meaning that one may claim there is no TS without any reference to meanings. People's understanding of the meaning of sentences is far more…
NASA Astrophysics Data System (ADS)
Maskeliunas, Rytis; Rudzionis, Vytautas
2011-06-01
In recent years various commercial speech recognizers have become available. These recognizers make it possible to develop applications incorporating various speech recognition techniques easily and quickly. All of these commercial recognizers are typically targeted at widely spoken languages with large market potential; however, it may be possible to adapt available commercial recognizers for use in environments where less widely spoken languages are used. Since most commercial recognition engines are closed systems, the single avenue for adaptation is to find suitable methods for selecting phonetic transcriptions between the two languages. This paper deals with methods for finding phonetic transcriptions of Lithuanian voice commands so that they can be recognized using English speech engines. The experimental evaluation showed that it is possible to find phonetic transcriptions that enable the recognition of Lithuanian voice commands with an accuracy of over 90%.
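The general idea of cross-language transcription can be sketched as a grapheme substitution table that renders words of the less widely spoken language in spellings an English-only engine can be given. The mapping below is a hypothetical toy for illustration, not the paper's actual transcription method.

```python
# Hypothetical toy mapping (illustrative only): approximate Lithuanian
# graphemes with rough English-like spellings so an English-only recognizer
# can be supplied with pronounceable pseudo-transcriptions.
LT_TO_EN = {
    "š": "sh", "ž": "zh", "č": "ch", "ė": "e", "ų": "oo", "ū": "oo",
    "ą": "a", "ę": "e", "į": "ee", "y": "ee", "c": "ts", "j": "y",
}

def approximate_transcription(word: str) -> str:
    """Replace each Lithuanian grapheme with a rough English equivalent;
    characters not in the table pass through unchanged."""
    return "".join(LT_TO_EN.get(ch, ch) for ch in word.lower())

print(approximate_transcription("ačiū"))  # → achioo
```

In a real adaptation, candidate transcriptions like these would be evaluated against the recognizer and the best-performing variants retained, since a single table cannot capture context-dependent pronunciation.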
Learning with a missing sense: what can we learn from the interaction of a deaf child with a turtle?
Miller, Paul
2009-01-01
This case study reports on the progress of Navon, a 13-year-old boy with prelingual deafness, over a 3-month period following exposure to Logo, a computer programming language that visualizes specific programming commands by means of a virtual drawing tool called the Turtle. Despite an almost complete lack of skills in spoken and sign language, Navon made impressive progress in his programming skills, including acquisition of a notable active written vocabulary, which he learned to apply in a purposeful, rule-based manner. His achievements are discussed with reference to commonly held assumptions about the relationship between language and thought, in general, and the prerequisite of proper spoken language skills for the acquisition of reading and writing, in particular. Highlighted are the central principles responsible for Navon's unexpected cognitive and linguistic development, including the way it affected his social relations with peers and teachers.
Active Learning for Automatic Audio Processing of Unwritten Languages (ALAPUL)
2016-07-01
AFRL-RH-WP-TR-2016-0074. Active Learning for Automatic Audio Processing of Unwritten Languages (ALAPUL). Dimitra Vergyri, Andreas Kathol, Wen Wang. Period of performance: June 2015-July 2016. Summary: The goal of the project was to investigate development of an automatic spoken language processing (ASLP) system
ERIC Educational Resources Information Center
Smolík, Filip; Stepankova, Hana; Vyhnálek, Martin; Nikolai, Tomáš; Horáková, Karolína; Matejka, Štepán
2016-01-01
Purpose: Propositional density (PD) is a measure of content richness in language production that declines in normal aging and more profoundly in dementia. The present study aimed to develop a PD scoring system for Czech and use it to compare PD in language productions of older people with amnestic mild cognitive impairment (aMCI) and control…
Auditory Word Recognition of Nouns and Verbs in Children with Specific Language Impairment (SLI)
ERIC Educational Resources Information Center
Andreu, Llorenc; Sanz-Torrent, Monica; Guardia-Olmos, Joan
2012-01-01
Nouns are fundamentally different from verbs semantically and syntactically, since verbs can specify one, two, or three nominal arguments. In this study, 25 children with Specific Language Impairment (age 5;3-8;2 years) and 50 typically developing children (3;3-8;2 years) participated in an eye-tracking experiment of spoken language comprehension…
ERIC Educational Resources Information Center
Andreu, Llorenc; Sanz-Torrent, Monica; Trueswell, John C.
2013-01-01
Twenty-five children with specific language impairment (SLI; age 5 years, 3 months [5;3]-8;2), 50 typically developing children (3;3-8;2), and 31 normal adults participated in three eye-tracking experiments of spoken language comprehension that were designed to investigate the use of verb information during real-time sentence comprehension in…
ERIC Educational Resources Information Center
Cain, Kate, Ed.; Oakhill, Jane, Ed.
2007-01-01
Comprehension is the ultimate aim of reading and listening. How do children develop the ability to comprehend written and spoken language, and what can be done to help those who are having difficulties? This book presents cutting-edge research on comprehension problems experienced by children without any formal diagnosis as well as those with…
Spelling Well Despite Developmental Language Disorder: What Makes it Possible?
Rakhlin, Natalia; Cardoso-Martins, Cláudia; Kornilov, Sergey A.; Grigorenko, Elena L.
2013-01-01
The goal of the study was to investigate the overlap between Developmental Language Disorder (DLD) and Developmental Dyslexia, identified through spelling difficulties (SD), in Russian-speaking children. In particular, we studied the role of phoneme awareness (PA), rapid automatized naming (RAN), pseudoword repetition (PWR), and morphological (MA) and orthographic awareness (OA) in differentiating children with DLD who have SD from children with DLD who are average spellers, by comparing the two groups to each other, to typically developing children, and to children with SD but without spoken language deficits. One hundred forty-nine children, aged 10.40 to 14.00 years, participated in the study. The results indicated that the SD, DLD, and DLD/SD groups did not differ from each other on PA and RAN Letters and underperformed in comparison to the control groups. However, whereas the children with written language deficits (SD and DLD/SD groups) underperformed on RAN Objects and Digits, PWR, OA, and MA, the children with DLD and no SD performed similarly to the children from the control groups on these measures. In contrast, the two groups with spoken language deficits (DLD and DLD/SD) underperformed on RAN Colors in comparison to the control groups and the group of children with SD only. The results support the notion that those children with DLD who have unimpaired PWR and RAN skills are able to overcome their weaknesses in spoken language and PA and acquire basic literacy on a par with their age peers with typical language. We also argue that our findings support a multifactorial model of developmental language disorders. PMID:23860907
Audio Visual Technology and the Teaching of Foreign Languages.
ERIC Educational Resources Information Center
Halbig, Michael C.
Skills in comprehending spoken language are becoming increasingly important due to the audio-visual orientation of our culture. It therefore seems natural to adjust learning goals and environments accordingly. The video-cassette machine is an ideal means of creating this learning environment and developing the listening…
ERIC Educational Resources Information Center
Freeman, Valerie; Pisoni, David B.; Kronenberger, William G.; Castellanos, Irina
2017-01-01
Deaf children with cochlear implants (CIs) are at risk for psychosocial adjustment problems, possibly due to delayed speech-language skills. This study investigated associations between a core component of spoken-language ability--speech intelligibility--and the psychosocial development of prelingually deaf CI users. Audio-transcription measures…
Inferring Speaker Affect in Spoken Natural Language Communication
ERIC Educational Resources Information Center
Pon-Barry, Heather Roberta
2013-01-01
The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…
Development of Mandarin spoken language after pediatric cochlear implantation.
Li, Bei; Soli, Sigfrid D; Zheng, Yun; Li, Gang; Meng, Zhaoli
2014-07-01
The purpose of this study was to evaluate early spoken language development in young Mandarin-speaking children during the first 24 months after cochlear implantation, as measured by receptive and expressive vocabulary growth rates. Growth rates were compared with those of normally hearing children and with growth rates for English-speaking children with cochlear implants. Receptive and expressive vocabularies were measured with the simplified short form (SSF) version of the Mandarin Communicative Development Inventory (MCDI) in a sample of 112 pediatric implant recipients at baseline, 3, 6, 12, and 24 months after implantation. Implant ages ranged from 1 to 5 years. Scores were expressed in terms of normal equivalent ages, allowing normalized vocabulary growth rates to be determined. Scores for English-speaking children were re-expressed in these terms, allowing direct comparisons of Mandarin and English early spoken language development. Vocabulary growth rates during the first 12 months after implantation were similar to those for normally hearing children less than 16 months of age. Comparisons with growth rates for normally hearing children 16-30 months of age showed that the youngest implant age group (1-2 years) had an average growth rate of 0.68 that of normally hearing children; while the middle implant age group (2-3 years) had an average growth rate of 0.65; and the oldest implant age group (>3 years) had an average growth rate of 0.56, significantly less than the other two rates. Growth rates for English-speaking children with cochlear implants were 0.68 in the youngest group, 0.54 in the middle group, and 0.57 in the oldest group. Growth rates in the middle implant age groups for the two languages differed significantly. The SSF version of the MCDI is suitable for assessment of Mandarin language development during the first 24 months after cochlear implantation. 
Effects of implant age and duration of implantation can be compared directly across languages using normalized vocabulary growth rates. These comparisons for Mandarin and English reveal comparable results, despite the diversity of these languages, underscoring the universal role of plasticity in the developing auditory system. Copyright © 2014. Published by Elsevier Ireland Ltd.
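The growth rates reported above can be read as simple ratios: the change in normal-equivalent vocabulary age divided by the elapsed chronological time, where 1.0 means vocabulary grows as fast as in normally hearing peers. The numbers in this sketch are invented for demonstration, not data from the study.

```python
# Illustrative arithmetic for a normalized vocabulary growth rate: change in
# normal-equivalent age divided by elapsed time since implantation.
# A rate of 1.0 matches the pace of normally hearing children.

def normalized_growth_rate(eq_age_start: float, eq_age_end: float,
                           months_elapsed: float) -> float:
    """Both equivalent ages and elapsed time are in months."""
    return (eq_age_end - eq_age_start) / months_elapsed

# Invented example: normal-equivalent age rises from 10.0 to 18.2 months
# over 12 months post-implantation.
rate = normalized_growth_rate(10.0, 18.2, 12.0)
print(round(rate, 2))  # 0.68
```

Because the rate is expressed on the normal-equivalent-age scale, it can be compared directly across languages, which is how the Mandarin and English figures above are put side by side.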
Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
2011-01-01
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.
ERIC Educational Resources Information Center
Werfel, Krystal L.
2017-01-01
Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…
ERIC Educational Resources Information Center
Hoover, Jill R.
2018-01-01
Purpose: The purpose of the current study was to determine the effect of neighborhood density and syntactic class on word recognition in children with specific language impairment (SLI) and typical development (TD). Method: Fifteen children with SLI ("M" age = 6;5 [years;months]) and 15 with TD ("M" age = 6;4) completed a…
Individual differences in online spoken word recognition: Implications for SLI
McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce
2012-01-01
Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort, and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels, vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014
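The role of lexical decay described above can be illustrated with a toy activation dynamic (this is a simplified sketch, not the actual TRACE implementation): activation accumulates from input evidence each time step and decays at a fixed rate, so a higher decay parameter yields weaker, shorter-lived target activation, the pattern the study links to poorer language skills.

```python
# Toy sketch (not the TRACE model itself): lexical activation that
# accumulates input evidence each step and leaks at rate `decay`.

def activation_trace(inputs, decay):
    """Return activation over time: a_t = a_{t-1} * (1 - decay) + input_t."""
    act, trace = 0.0, []
    for x in inputs:
        act = act * (1.0 - decay) + x
        trace.append(act)
    return trace

low_decay = activation_trace([0.2] * 5, decay=0.1)
high_decay = activation_trace([0.2] * 5, decay=0.5)
# Stronger decay leaves less accumulated activation for the target word.
print(low_decay[-1] > high_decay[-1])  # True
```

In a competition model, this weaker target activation leaves cohort and rhyme competitors relatively more active, mirroring the extra competitor fixations reported above.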
Preference for language in early infancy: the human language bias is not speech specific.
Krentz, Ursula C; Corina, David P
2008-01-01
Fundamental to infants' acquisition of their native language is an inherent interest in the language spoken around them over non-linguistic environmental sounds. The following studies explored whether the bias for linguistic signals in hearing infants is specific to speech, or reflects a general bias for all human language, spoken and signed. Results indicate that 6-month-old infants prefer an unfamiliar, visual-gestural language (American Sign Language) over non-linguistic pantomime, but 10-month-olds do not. These data provide evidence against a speech-specific bias in early infancy and provide insights into those properties of human languages that may underlie this language-general attentional bias.
Sutton, Ann; Trudeau, Natacha; Morford, Jill; Rios, Monica; Poirier, Marie-Andrée
2010-01-01
Children who require augmentative and alternative communication (AAC) systems while they are in the process of acquiring language face unique challenges because they use graphic symbols for communication. In contrast to the situation of typically developing children, they use different modalities for comprehension (auditory) and expression (visual). This study explored the ability of three- and four-year-old children without disabilities to perform tasks involving sequences of graphic symbols. Thirty participants were asked to transpose spoken simple sentences into graphic symbols by selecting individual symbols corresponding to the spoken words, and to interpret graphic symbol utterances by selecting one of four photographs corresponding to a sequence of three graphic symbols. The results showed that these were not simple tasks for the participants, and few of them performed in the expected manner: only one in transposition, and only one-third of participants in interpretation. In some cases, individual response strategies led to contrasting response patterns. Children at this age level have not yet developed the skills required to deal with graphic symbols even though they have mastered the corresponding spoken language structures.
Social scale and structural complexity in human languages.
Nettle, Daniel
2012-07-05
The complexity of different components of the grammars of human languages can be quantified. For example, languages vary greatly in the size of their phonological inventories, and in the degree to which they make use of inflectional morphology. Recent studies have shown that there are relationships between these types of grammatical complexity and the number of speakers a language has. Languages spoken by large populations have been found to have larger phonological inventories, but simpler morphology, than languages spoken by small populations. The results require further investigation, and, most importantly, the mechanism whereby the social context of learning and use affects the grammatical evolution of a language needs elucidation.
E-cigarette use and disparities by race, citizenship status and language among adolescents.
Alcalá, Héctor E; Albert, Stephanie L; Ortega, Alexander N
2016-06-01
E-cigarette use among adolescents is on the rise in the U.S. However, limited attention has been given to examining the role of race, citizenship status, and language spoken at home in shaping e-cigarette use behavior. Data are from the 2014 Adolescent California Health Interview Survey, which interviewed 1052 adolescents ages 12-17. Lifetime e-cigarette use was examined by sociodemographic characteristics. Separate logistic regression models predicted odds of ever smoking e-cigarettes from race, citizenship status, and language spoken at home. Sociodemographic characteristics were then added to these models as control variables, and a model with all three predictors and controls was run. Similar models were run with conventional smoking as an outcome. Overall, 10.3% of adolescents had ever used e-cigarettes. E-cigarette use was higher among ever-smokers of conventional cigarettes, individuals above 200% of the Federal Poverty Level, US citizens, and those who spoke only English at home. Multivariate analyses demonstrated that citizenship status and language spoken at home were associated with lifetime e-cigarette use after accounting for control variables. Only citizenship status remained associated with e-cigarette use when the control variables, race, and language spoken at home were all included in the same model. Ever use of e-cigarettes in this study was higher than previously reported national estimates. Action is needed to curb the use of e-cigarettes among adolescents. Differences in lifetime e-cigarette use by citizenship status and language spoken at home suggest that less acculturated individuals use e-cigarettes at lower rates. Copyright © 2016 Elsevier Ltd. All rights reserved.
Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł
2015-05-01
In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we use fMRI to compare the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.
Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory
2008-12-01
Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.
Variation in Discourse Strategies in a Multilingual Context
ERIC Educational Resources Information Center
Bai, B. Lakshmi
2010-01-01
This paper is an attempt to study empirically a sample of spoken narratives of Hindi, Telugu and Dakkhini speakers in the multilingual setting of Hyderabad. After a brief account of multilingualism and variation within a language as commonly occurring phenomena, the paper examines the spoken narratives of the three languages mentioned above with a…
Spoken Grammar Practice and Feedback in an ASR-Based CALL System
ERIC Educational Resources Information Center
de Vries, Bart Penning; Cucchiarini, Catia; Bodnar, Stephen; Strik, Helmer; van Hout, Roeland
2015-01-01
Speaking practice is important for learners of a second language. Computer assisted language learning (CALL) systems can provide attractive opportunities for speaking practice when combined with automatic speech recognition (ASR) technology. In this paper, we present a CALL system that offers spoken practice of word order, an important aspect of…
ERIC Educational Resources Information Center
D'Mello, Sidney K.; Dowell, Nia; Graesser, Arthur
2011-01-01
There is the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The "speech facilitation" hypothesis predicts that spoken input will "increase" learning,…
On-Line Syntax: Thoughts on the Temporality of Spoken Language
ERIC Educational Resources Information Center
Auer, Peter
2009-01-01
One fundamental difference between spoken and written language has to do with the "linearity" of speaking in time, in that the temporal structure of speaking is inherently the outcome of an interactive process between speaker and listener. But despite the status of "linearity" as one of Saussure's fundamental principles, in practice little more…
A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome
ERIC Educational Resources Information Center
McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard
2016-01-01
Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…
ERIC Educational Resources Information Center
Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna
2013-01-01
Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in…
Influences of Indigenous Language on Spatial Frames of Reference in Aboriginal English
ERIC Educational Resources Information Center
Edmonds-Wathen, Cris
2014-01-01
The Aboriginal English spoken by Indigenous children in remote communities in the Northern Territory of Australia is influenced by the home languages spoken by themselves and their families. This affects uses of spatial terms used in mathematics such as "in front" and "behind." Speakers of the endangered Indigenous Australian…
ERIC Educational Resources Information Center
Zimmer, Patricia Moore
2001-01-01
Describes the author's experiences directing a play translated and acted in Korean. Notes that she had to get familiar with the sound of the language spoken fluently, to see how an actor's thought is discerned when the verbal language is not understood. Concludes that so much of understanding and communication unfolds in ways other than with…
Research on Spoken Language Processing. Progress Report No. 21 (1996-1997).
ERIC Educational Resources Information Center
Pisoni, David B.
This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…
ERIC Educational Resources Information Center
Konnerth, Linda Anna
2014-01-01
Karbi is a Tibeto-Burman (TB) language spoken by half a million people in the Karbi Anglong district in Assam, Northeast India, and surrounding areas in the extended Brahmaputra Valley area. It is an agglutinating, verb-final language. This dissertation offers a description of the dialect spoken in the hills of the Karbi Anglong district. It is…
L2 Gender Facilitation and Inhibition in Spoken Word Recognition
ERIC Educational Resources Information Center
Behney, Jennifer N.
2011-01-01
This dissertation investigates the role of grammatical gender facilitation and inhibition in second language (L2) learners' spoken word recognition. Native speakers of languages that have grammatical gender are sensitive to gender marking when hearing and recognizing a word. Gender facilitation refers to when a given noun that is preceded by an…
Mastrantuono, Eliana; Saldaña, David; Rodríguez-Ortiz, Isabel R.
2017-01-01
An eye tracking experiment explored the gaze behavior of deaf individuals when perceiving language in spoken and sign language only, and in sign-supported speech (SSS). Participants were deaf (n = 25) and hearing (n = 25) Spanish adolescents. Deaf students were prelingually profoundly deaf individuals with cochlear implants (CIs) used by age 5 or earlier, or prelingually profoundly deaf native signers with deaf parents. The effectiveness of SSS has rarely been tested within the same group of children for discourse-level comprehension. Here, video-recorded texts, including spatial descriptions, were alternately transmitted in spoken language, sign language and SSS. The capacity of these communicative systems to equalize comprehension in deaf participants with that of spoken language in hearing participants was tested. Within-group analyses of deaf participants tested if the bimodal linguistic input of SSS favored discourse comprehension compared to unimodal languages. Deaf participants with CIs achieved equal comprehension to hearing controls in all communicative systems while deaf native signers with no CIs achieved equal comprehension to hearing participants if tested in their native sign language. Comprehension of SSS was not increased compared to spoken language, even when spatial information was communicated. Eye movements of deaf and hearing participants were tracked and data of dwell times spent looking at the face or body area of the sign model were analyzed. Within-group analyses focused on differences between native and non-native signers. Dwell times of hearing participants were equally distributed across upper and lower areas of the face while deaf participants mainly looked at the mouth area; this could enable information to be obtained from mouthings in sign language and from lip-reading in SSS and spoken language. Few fixations were directed toward the signs, although these were more frequent when spatial language was transmitted. 
Both native and non-native signers looked mainly at the face when perceiving sign language, although non-native signers looked significantly more at the body than native signers. This distribution of gaze fixations suggested that deaf individuals – particularly native signers – mainly perceived signs through peripheral vision. PMID:28680416
ERIC Educational Resources Information Center
Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John
2014-01-01
Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…
ERIC Educational Resources Information Center
Bunta, Ferenc; Douglas, Michael; Dickson, Hanna; Cantu, Amy; Wickesberg, Jennifer; Gifford, René H.
2016-01-01
Background: There is a critical need to understand better speech and language development in bilingual children learning two spoken languages who use cochlear implants (CIs) and hearing aids (HAs). The paucity of knowledge in this area poses a significant barrier to providing maximal communicative outcomes to a growing number of children who have…
ERIC Educational Resources Information Center
Vanderplank, Robert N.
1980-01-01
An experiment was carried out at the University of Edinburgh to discover ways in which students might be helped to understand spoken language and to become more confident in their interactions in the language. As a result of the experimental findings, materials were designed to train students to perceive stress patterns, to internalize stress-timing…
The Production and Distribution of Burarra Talking Books
ERIC Educational Resources Information Center
Darcy, Rose; Auld, Glenn
2008-01-01
The use of ICTs to support literacy in a minority Indigenous Australian language is an important domain of pedagogy that is often overlooked by teachers in these contexts. The development of new technological configurations in remote communities can be highly supportive of Indigenous languages spoken by a small number of people. This paper reports…
ERIC Educational Resources Information Center
Hoffstaedter, Petra; Kohn, Kurt
2016-01-01
We report on a case study on pedagogical affordances of intercultural telecollaboration for authentic communication practice and competence development in the local foreign language. Focus is on spoken and written conversations involving pairs of secondary school pupils of different linguacultural backgrounds. Particular attention is given to…
Bilingual Virtual Reference: It's Better than Searching the Open Web
ERIC Educational Resources Information Center
Lupien, Pascal
2004-01-01
Online library services have mostly been available only in English. To serve the diverse communities that use library services, a growing number of libraries have been investigating the ins and outs of offering virtual reference in languages other than English. Developing this kind of service in languages spoken by various library user groups…
The Effects of Implicit Instruction on Implicit and Explicit Knowledge Development
ERIC Educational Resources Information Center
Godfroid, Aline
2016-01-01
This study extends the evidence for implicit second language (L2) learning, which comes largely from (semi-)artificial language research, to German. Upper-intermediate L2 German learners were flooded with spoken exemplars of a difficult morphological structure, namely strong, vowel-changing verbs. Toward the end of exposure, the mandatory vowel…
English for Tourism and Hospitality Purposes (ETP)
ERIC Educational Resources Information Center
Zahedpisheh, Nahid; Abu Bakar, Zulqarnain B.; Saffari, Narges
2017-01-01
The rapid development of the tourism and hospitality industry can directly influence the English language, which is the most widely used and spoken language in international tourism in the twenty-first century. English for tourism has a major role in the delivery of quality service. Employees who work in the tourism and hospitality industry are…
Highlights on the History and Evolution of the Rumanian Language.
ERIC Educational Resources Information Center
Buzash, Michael D.
A brief history of modern Rumanian is chronicled, focusing on the influence of a variety of languages on Rumanian's development. Four regional variations are identified: Dacio-Rumanian, Macedo-Rumanian, Megleno-Rumanian, and Istro-Rumanian, all evolving from the Latin spoken in the corresponding areas beginning in imperial Roman times. The…
Learning to Look for Language: Development of Joint Attention in Young Deaf Children
ERIC Educational Resources Information Center
Lieberman, Amy M.; Hatrak, Marla; Mayberry, Rachel I.
2014-01-01
Joint attention between hearing children and their caregivers is typically achieved when the adult provides spoken, auditory linguistic input that relates to the child's current visual focus of attention. Deaf children interacting through sign language must learn to continually switch visual attention between people and objects in order to achieve…
Reading for Pleasure: More than Just a Distant Possibility?
ERIC Educational Resources Information Center
Barber, Karen Slikas
2014-01-01
Much has been written about the importance of extensive reading for the development of language fluency, yet it is not often an activity of choice by students as a means of improving language learning. Many of my multi-level (elementary-intermediate) Adult Migrant English Program (AMEP) Certificates in Spoken and Written English (CSWE) students…
Listen, Listen, Listen and Listen: Building a Comprehension Corpus and Making It Comprehensible
ERIC Educational Resources Information Center
Mordaunt, Owen G.; Olson, Daniel W.
2010-01-01
Listening comprehension input is necessary for language learning and acculturation. One approach to developing listening comprehension skills is through exposure to massive amounts of naturally occurring spoken language input. But exposure to this input is not enough; learners also need to make the comprehension corpus meaningful to their learning…
The Comprehension and Production of Wh-Questions in Deaf and Hard-of-Hearing Children
ERIC Educational Resources Information Center
Friedmann, Naama; Szterman, Ronit
2011-01-01
Hearing loss during the critical period for language acquisition restricts spoken language input. This input limitation, in turn, may hamper syntactic development. This study examined the comprehension, production, and repetition of Wh-questions in deaf or hard-of-hearing (DHH) children. The participants were 11 orally trained Hebrew-speaking…
Applications of Text Analysis Tools for Spoken Response Grading
ERIC Educational Resources Information Center
Crossley, Scott; McNamara, Danielle
2013-01-01
This study explores the potential for automated indices related to speech delivery, language use, and topic development to model human judgments of TOEFL speaking proficiency in second language (L2) speech samples. For this study, 244 transcribed TOEFL speech samples taken from 244 L2 learners were analyzed using automated indices taken from…
The Seldom-Spoken Roots of the Curriculum: Romanticism and the New Literacy.
ERIC Educational Resources Information Center
Willinsky, John M.
1987-01-01
In language education, several recent curricular developments from expressive writing to interactional reading share a common core of assumptions rooted in British Romanticism. This article compares central tenets of the New Literacy and Romanticism, focusing on the former's reconceptualization of the teacher, the student, and the language arts…
Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study
ERIC Educational Resources Information Center
Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua
2012-01-01
Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…
Cochlear implants and spoken language processing abilities: review and assessment of the literature.
Peterson, Nathaniel R; Pisoni, David B; Miyamoto, Richard T
2010-01-01
Cochlear implants (CIs) process sounds electronically and then transmit electric stimulation to the cochlea of individuals with sensorineural deafness, restoring some sensation of auditory perception. Many congenitally deaf CI recipients achieve a high degree of accuracy in speech perception and develop near-normal language skills. Post-lingually deafened implant recipients often regain the ability to understand and use spoken language with or without the aid of visual input (i.e. lip reading). However, there is wide variation in individual outcomes following cochlear implantation, and some CI recipients never develop usable speech and oral language skills. The causes of this enormous variation in outcomes are only partly understood at the present time. The variables most strongly associated with language outcomes are age at implantation and mode of communication in rehabilitation. Thus, some of the more important factors determining success of cochlear implantation are broadly related to neural plasticity that appears to be transiently present in deaf individuals. In this article we review the expected outcomes of cochlear implantation, potential predictors of those outcomes, the basic science regarding critical and sensitive periods, and several new research directions in the field of cochlear implantation.
Kliewer, C
1995-06-01
Interactive and literacy-based language use of young children within the context of an inclusive preschool classroom was explored. An interpretivist framework and qualitative research methods, including participant observation, were used to examine and analyze language in five preschool classes that were composed of children with and without disabilities. Children's language use included spoken, written, signed, and typed forms. Results showed complex communicative and literacy language use on the part of young children that fell outside conventional adult perspectives. Also, children who used expressive methods other than speech were often left out of the contexts where spoken language was richest and most complex.
Using Unscripted Spoken Texts in the Teaching of Second Language Listening
ERIC Educational Resources Information Center
Wagner, Elvis
2014-01-01
Most spoken texts that are used in second language (L2) listening classroom activities are scripted texts, where the text is written, revised, polished, and then read aloud with artificially clear enunciation and slow rate of speech. This article explores the field's overreliance on these scripted texts, at the expense of including unscripted…
Revisiting Debates on Oracy: Classroom Talk--Moving towards a Democratic Pedagogy?
ERIC Educational Resources Information Center
Coultas, Valerie
2015-01-01
This article uses documentary evidence to review debates on spoken language and learning in the UK over recent decades. It argues that two different models of talk have been at stake: one that wishes to "correct" children's spoken language and another that encourages children to use talk to learn and represent their worlds. The article…
The Contribution of the Inferior Parietal Cortex to Spoken Language Production
ERIC Educational Resources Information Center
Geranmayeh, Fatemeh; Brownsett, Sonia L. E.; Leech, Robert; Beckmann, Christian F.; Woodhead, Zoe; Wise, Richard J. S.
2012-01-01
This functional MRI study investigated the involvement of the left inferior parietal cortex (IPC) in spoken language production (Speech). Its role has been apparent in some studies but not others, and is not convincingly supported by clinical studies as they rarely include cases with lesions confined to the parietal lobe. We compared Speech with…
ERIC Educational Resources Information Center
Frimberger, Katja
2016-01-01
This article explores the author's embodied experience of linguistic incompetence in the context of an interview-based, short, promotional film production about people's personal connections to their spoken languages in Glasgow, Scotland/UK. The article highlights that people's right to their spoken languages during film interviews and the…
Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language
ERIC Educational Resources Information Center
Nicholas, Johanna G.; Geers, Ann E.
2008-01-01
Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…
A Race to Rescue Native Tongues
ERIC Educational Resources Information Center
Ashburn, Elyse
2007-01-01
Of the 300 or so native languages once spoken in North America, only about 150 are still spoken--and the majority of those have just a handful of mostly elderly speakers. For most Native American languages, colleges and universities are their last great hope, if not their final resting place. People at a number of institutions across the country…
Guidelines for Evaluating Auditory-Oral Programs for Children Who Are Hearing Impaired.
ERIC Educational Resources Information Center
Alexander Graham Bell Association for the Deaf, Inc., Washington, DC.
These guidelines are intended to assist parents in evaluating educational programs for children who are hearing impaired, where a program's stated intention is promoting the child's optimal use of spoken language as a mode of everyday communication and learning. The guidelines are applicable to programs where spoken language is the sole mode or…
Beyond Rhyme or Reason: ERPs Reveal Task-Specific Activation of Orthography on Spoken Language
ERIC Educational Resources Information Center
Pattamadilok, Chotiga; Perre, Laetitia; Ziegler, Johannes C.
2011-01-01
Metaphonological tasks, such as rhyme judgment, have been the primary tool for the investigation of the effects of orthographic knowledge on spoken language. However, it has been recently argued that the orthography effect in rhyme judgment does not reflect the automatic activation of orthographic codes but rather stems from sophisticated response…
Effects of Tasks on Spoken Interaction and Motivation in English Language Learners
ERIC Educational Resources Information Center
Carrero Pérez, Nubia Patricia
2016-01-01
Task based learning (TBL) or Task based learning and teaching (TBLT) is a communicative approach widely applied in settings where English has been taught as a foreign language (EFL). It has been documented as greatly useful to improve learners' communication skills. This research intended to find the effect of tasks on students' spoken interaction…
ERIC Educational Resources Information Center
Gollan, Tamar H.; Weissberger, Gali H.; Runnqvist, Elin; Montoya, Rosa I.; Cera, Cynthia M.
2012-01-01
This study investigated correspondence between different measures of bilingual language proficiency contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and…
Geytenbeek, Joke J M; Vermeulen, R Jeroen; Becher, Jules G; Oostrom, Kim J
2015-03-01
To assess spoken language comprehension in non-speaking children with severe cerebral palsy (CP) and to explore possible associations with motor type and disability. Eighty-seven non-speaking children (44 males, 43 females, mean age 6y 8mo, SD 2y 1mo) with spastic (54%) or dyskinetic (46%) CP (Gross Motor Function Classification System [GMFCS] levels IV [39%] and V [61%]) underwent spoken language comprehension assessment with the computer-based instrument for low motor language testing (C-BiLLT), a new and validated diagnostic instrument. A multiple linear regression model was used to investigate which variables explained the variation in C-BiLLT scores. Associations between spoken language comprehension abilities (expressed in z-score or age-equivalent score) and motor type of CP, GMFCS and Manual Ability Classification System (MACS) levels, gestational age, and epilepsy were analysed with Fisher's exact test. A p-value <0.05 was considered statistically significant. Chronological age, motor type, and GMFCS classification explained 33% (R = 0.577, R² = 0.33) of the variance in spoken language comprehension. Of the children aged younger than 6 years 6 months, 52.4% of the children with dyskinetic CP attained comprehension scores within the average range (z-score ≥ -1.6) as opposed to none of the children with spastic CP. Of the children aged older than 6 years 6 months, 32% of the children with dyskinetic CP reached the highest achievable age-equivalent score compared to 4% of the children with spastic CP. No significant difference in disability was found between CP-related variables (MACS levels, gestational age, epilepsy), with the exception of GMFCS which showed a significant difference in children aged younger than 6 years 6 months (p=0.043). Despite communication disabilities in children with severe CP, particularly in dyskinetic CP, spoken language comprehension may show no or only moderate delay. 
These findings emphasize the importance of introducing alternative and/or augmentative communication devices from early childhood. © 2014 Mac Keith Press.
Low self-concept in poor readers: prevalence, heterogeneity, and risk.
McArthur, Genevieve; Castles, Anne; Kohnen, Saskia; Banales, Erin
2016-01-01
There is evidence that poor readers are at increased risk for various types of low self-concept-particularly academic self-concept. However, this evidence ignores the heterogeneous nature of poor readers, and hence the likelihood that not all poor readers have low self-concept. The aim of this study was to better understand which types of poor readers have low self-concept. We tested 77 children with poor reading for their age for four types of self-concept, four types of reading, three types of spoken language, and two types of attention. We found that poor readers with poor attention had low academic self-concept, while poor readers with poor spoken language had low general self-concept in addition to low academic self-concept. In contrast, poor readers with typical spoken language and attention did not have low self-concept of any type. We also discovered that academic self-concept was reliably associated with reading and receptive spoken vocabulary, and that general self-concept was reliably associated with spoken vocabulary. These outcomes suggest that poor readers with multiple impairments in reading, language, and attention are at higher risk for low academic and general self-concept, and hence need to be assessed for self-concept in clinical practice. Our results also highlight the need for further investigation into the heterogeneous nature of self-concept in poor readers.
Selective auditory attention in adults: effects of rhythmic structure of the competing language.
Reel, Leigh Ann; Hicks, Candace Bourland
2012-02-01
The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Reception thresholds for English sentences were measured for 50 monolingual English-speaking adults in conditions with 2-talker (male-female) competing speech spoken in a stress-based (English, German), syllable-based (Spanish, French), or mora-based (Japanese) language. Two different masking signals were created for each language (i.e., 2 different 2-talker pairs). All subjects were tested in 10 competing conditions (2 conditions for each of the 5 languages). A significant difference was noted between the 2 masking signals within each language. Across languages, significantly greater listening difficulty was observed in conditions where competing speech was spoken in English, German, or Japanese, as compared with Spanish or French. Results suggest that (a) for a particular language, masking effectiveness can vary between different male-female 2-talker maskers and (b) for stress-based vs. syllable-based languages, competing speech is more difficult to ignore when spoken in a language from the native rhythmic class as compared with a nonnative rhythmic class, regardless of whether the language is familiar or unfamiliar to the listener.
Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih
2017-07-19
It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.
Developing Operationally-Proficient Linguists: It’s About Time
2011-03-14
and Russian (0.3%). Third, following World War II, the United States emerged as the world's economic power and English became the lingua...Defense Strategic Language List. Currently, NLFI languages include Arabic, Chinese, Hindi, Urdu, Korean, Persian, and Russian and African...Chinese, Arabic, Vietnamese, Korean, Russian, Polish, is spoken, then that person would be a heritage speaker of that language." Olga Kagan, "What is
Selected Topics from LVCSR Research for Asian Languages at Tokyo Tech
NASA Astrophysics Data System (ADS)
Furui, Sadaoki
This paper presents our recent work in regard to building Large Vocabulary Continuous Speech Recognition (LVCSR) systems for the Thai, Indonesian, and Chinese languages. For Thai, since there is no word boundary in the written form, we have proposed a new method for automatically creating word-like units from a text corpus, and applied topic and speaking style adaptation to the language model to recognize spoken-style utterances. For Indonesian, we have applied proper noun-specific adaptation to acoustic modeling, and rule-based English-to-Indonesian phoneme mapping to solve the problem of large variation in proper noun and English word pronunciation in a spoken-query information retrieval system. In spoken Chinese, long organization names are frequently abbreviated, and abbreviated utterances cannot be recognized if the abbreviations are not included in the dictionary. We have proposed a new method for automatically generating Chinese abbreviations, and by expanding the vocabulary using the generated abbreviations, we have significantly improved the performance of spoken query-based search.
The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing
Gow, David W.
2012-01-01
Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing. PMID:22498237
Relationship between affect and achievement in science and mathematics in Malaysia and Singapore
NASA Astrophysics Data System (ADS)
Thoe Ng, Khar; Fah Lay, Yoon; Areepattamannil, Shaljan; Treagust, David F.; Chandrasegaran, A. L.
2012-11-01
Background: The Trends in International Mathematics and Science Study (TIMSS) assesses the quality of the teaching and learning of science and mathematics among Grades 4 and 8 students across participating countries. Purpose: This study explored the relationship between positive affect towards science and mathematics and achievement in science and mathematics among Malaysian and Singaporean Grade 8 students. Sample: In total, 4466 Malaysian students and 4599 Singaporean students from Grade 8 who participated in TIMSS 2007 were involved in this study. Design and method: Students' achievement scores on eight items in the survey instrument that were reported in TIMSS 2007 were used as the dependent variable in the analysis. Students' scores on four items in the TIMSS 2007 survey instrument pertaining to students' affect towards science and mathematics, together with students' gender, language spoken at home and parental education, were used as the independent variables. Results: Positive affect towards science and mathematics indicated statistically significant predictive effects on achievement in the two subjects for both Malaysian and Singaporean Grade 8 students. There were statistically significant predictive effects on mathematics achievement for the students' gender, language spoken at home and parental education for both Malaysian and Singaporean students, with R² = 0.18 and 0.21, respectively. However, only parental education showed statistically significant predictive effects on science achievement for both countries. For Singapore, language spoken at home also demonstrated statistically significant predictive effects on science achievement, whereas gender did not. For Malaysia, neither gender nor language spoken at home had statistically significant predictive effects on science achievement. 
Conclusions: It is important for educators to consider implementing self-concept enhancement intervention programmes by incorporating 'affect' components of academic self-concept in order to develop students' talents and promote academic excellence in science and mathematics.
Is Language a Barrier to the Use of Preventive Services?
Woloshin, Steven; Schwartz, Lisa M; Katz, Steven J; Welch, H Gilbert
1997-01-01
OBJECTIVE To isolate the effect of spoken language from financial barriers to care, we examined the relation of language to use of preventive services in a system with universal access. DESIGN Cross-sectional survey. SETTING Household population of women living in Ontario, Canada, in 1990. PARTICIPANTS Subjects were 22,448 women completing the 1990 Ontario Health Survey, a population-based random sample of households. MEASUREMENTS AND MAIN RESULTS We defined language as the language spoken in the home and assessed self-reported receipt of breast examination, mammogram and Pap testing. We used logistic regression to calculate odds ratios for each service adjusting for potential sources of confounding: socioeconomic characteristics, contact with the health care system, and measures reflecting culture. Ten percent of the women spoke a non-English language at home (4% French, 6% other). After adjustment, compared with English speakers, French-speaking women were significantly less likely to receive breast exams or mammography, and other language speakers were less likely to receive Pap testing. CONCLUSIONS Women whose main spoken language was not English were less likely to receive important preventive services. Improving communication with patients with limited English may enhance participation in screening programs. PMID:9276652
Øhre, Beate; Volden, Maj; Falkum, Erik; von Tetzchner, Stephen
2017-01-01
Deaf and hard of hearing (DHH) individuals who use signed language and those who use spoken language face different challenges and stressors. Accordingly, the profile of their mental problems may also differ. However, studies of mental disorders in this population have seldom differentiated between linguistic groups. Our study compares demographics, mental disorders, and levels of distress and functioning in 40 patients using Norwegian Sign Language (NSL) and 36 patients using spoken language. Assessment instruments were translated into NSL. More signers were deaf than hard of hearing, did not share a common language with their childhood caregivers, and had attended schools for DHH children. More Norwegian-speaking than signing patients reported medical comorbidity, whereas the distribution of mental disorders, symptoms of anxiety and depression, and daily functioning did not differ significantly. Somatic complaints and greater perceived social isolation indicate higher stress levels in DHH patients using spoken language than in those using sign language. Therefore, preventive interventions are necessary, as well as larger epidemiological and clinical studies concerning the mental health of all language groups within the DHH population. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension
Sekerina, Irina A.; Campanelli, Luca; Van Dyke, Julie A.
2016-01-01
The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration, however this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility to study retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors, which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects, when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals. PMID:27378974
Multimedia Modular Approach for Augmenting the Speaking Skill of the Student-Teachers
ERIC Educational Resources Information Center
Jose, G. Rexlin; Raja, B. William Dharma
2012-01-01
Language is the most important instrument for communication. It enables and facilitates both the speaker and the listener to exchange their thoughts and feelings. It is the basis for social, cultural, aesthetic, spiritual and economic development and growth of every human being. Unless the spoken language is free from errors and barriers, it can…
The Influence of Gujarati and Tamil L1s on Indian English: A Preliminary Study
ERIC Educational Resources Information Center
Wiltshire, Caroline R.; Harnsberger, James D.
2006-01-01
English as spoken as a second language in India has developed distinct sound patterns in terms of both segmental and prosodic characteristics. We investigate the differences between two groups varying in native language (Gujarati, Tamil) to evaluate to what extent Indian English (IE) accents are based on a single target phonological-phonetic…
Fundamentally Speaking: A Focus on English as a Second Language (ESL).
ERIC Educational Resources Information Center
Winfield, Marie Yolette
The techniques used in teaching spoken and written English must be re-evaluated, and more effort should be directed toward helping individuals develop adequate ways of acquiring language skills. Classroom models that confront students with texts they cannot read effectively, and that compel them to repeat words and sentences in chorus, should be…
Deviant ERP Response to Spoken Non-Words among Adolescents Exposed to Cocaine in Utero
ERIC Educational Resources Information Center
Landi, Nicole; Crowley, Michael J.; Wu, Jia; Bailey, Christopher A.; Mayes, Linda C.
2012-01-01
Concern for the impact of prenatal cocaine exposure (PCE) on human language development is based on observations of impaired performance on assessments of language skills in these children relative to non-exposed children. We investigated the effects of PCE on speech processing ability using event-related potentials (ERPs) among a sample of…
Invariance Detection within an Interactive System: A Perceptual Gateway to Language Development
ERIC Educational Resources Information Center
Gogate, Lakshmi J.; Hollich, George
2010-01-01
In this article, we hypothesize that "invariance detection," a general perceptual phenomenon whereby organisms attend to relatively stable patterns or regularities, is an important means by which infants tune in to various aspects of spoken language. In so doing, we synthesize a substantial body of research on detection of regularities across the…
A Comparison of Students' Performances Using Audio Only and Video Media Methods
ERIC Educational Resources Information Center
Sulaiman, Norazean; Muhammad, Ahmad Mazli; Ganapathy, Nurul Nadiah Dewi Faizul; Khairuddin, Zulaikha; Othman, Salwa
2017-01-01
Listening is a crucial skill to be learned in the second language classroom because it is essential for the development of spoken language proficiency (Hamouda, 2013). The aim of this study is to investigate the significant differences in students' performance when using the traditional (audio-only) method and the video media method. The data of…
ERIC Educational Resources Information Center
Dammeyer, Jesper
2010-01-01
Research has shown a prevalence of psychosocial difficulties ranging from about 20% to 50% among children with hearing loss. This study evaluates the prevalence of psychosocial difficulties in a Danish population in relation to different explanatory variables. Five scales and questionnaires measuring sign language, spoken language, hearing…
ERIC Educational Resources Information Center
Hogan, Sarah; Stokes, Jacqueline; White, Catherine; Tyszkiewicz, Elizabeth; Woolgar, Alexandra
2008-01-01
Providing unbiased data concerning the outcomes of particular intervention methods is imperative if professionals and parents are to assimilate information which could contribute to an "informed choice". An evaluation of Auditory Verbal Therapy (AVT) was conducted using a formal assessment of spoken language as an outcome measure. Spoken…
Detailed Phonetic Labeling of Multi-language Database for Spoken Language Processing Applications
2015-03-01
This report describes a phonetically labeled multi-language database for spoken language processing applications, including a noisy test set containing about 60 interfering speakers as well as background music in a bar, evaluated under clean-training/noisy-testing settings. A recognition system for Mandarin was developed and tested; character recognition rates as high as 88% were obtained.
ERIC Educational Resources Information Center
Montgomery, James W.; Gillam, Ronald B.; Evans, Julia L.
2016-01-01
Purpose: Compared with same-age typically developing peers, school-age children with specific language impairment (SLI) exhibit significant deficits in spoken sentence comprehension. They also demonstrate a range of memory limitations. Whether these 2 deficit areas are related is unclear. The present review article aims to (a) review 2 main…
Multilingualism and Assimilationism in Australia's Literacy-Related Educational Policies
ERIC Educational Resources Information Center
Schalley, Andrea C.; Guillemin, Diana; Eisenchlas, Susana A.
2015-01-01
Australia is a country of high linguistic diversity, with more than 300 languages spoken. Today, 19% of the population aged over 5 years speak a language other than English at home. Against this background, we examine government policies and prominent initiatives developed at national level in the past 30 years to address the challenge of offering…
Evaluating the spoken English proficiency of graduates of foreign medical schools.
Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E
2001-08-01
The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.
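The abstract's reliability analysis pools ratings over 10 encounters. A minimal sketch of the underlying idea, using simulated ratings and Cronbach's alpha as a simple stand-in for a one-facet generalizability coefficient (the sample size, rating scale, and effect sizes below are invented, not ECFMG data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_candidates, n_encounters = 200, 10  # simulated; smaller than the real 2500

# Hypothetical ratings: a true spoken-English ability per candidate plus
# encounter-level noise (simulated data, not the study's).
ability = rng.normal(size=(n_candidates, 1))
ratings = ability + rng.normal(scale=0.6, size=(n_candidates, n_encounters))

def cronbach_alpha(x):
    """Consistency of the mean rating across encounters (columns)."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()   # variance of each encounter
    total_var = x.sum(axis=1).var(ddof=1)     # variance of candidate totals
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"reliability of 10-encounter mean: {cronbach_alpha(ratings):.2f}")
```

With ten encounters averaged, even moderately noisy per-encounter ratings yield a high composite reliability, which is the pattern the study reports.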
ERIC Educational Resources Information Center
Li, Xiao-qing; Ren, Gui-qin
2012-01-01
An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…
An Analysis of a Language Test for Employment: The Authenticity of the PhonePass Test
ERIC Educational Resources Information Center
Chun, Christian W.
2006-01-01
This article presents an analysis of Ordinate Corporation's PhonePass Spoken English Test-10. The company promotes this product as being a useful assessment tool for screening job candidates' ability in spoken English. In the real-life domain of the work environment, one of the primary target language use tasks involves extended production…
ERIC Educational Resources Information Center
Crowe, Kathryn; McLeod, Sharynne; McKinnon, David H.; Ching, Teresa Y. C.
2014-01-01
Purpose: The authors sought to investigate the influence of a comprehensive range of factors on the decision making of caregivers of children with hearing loss regarding the use of speech, the use of sign, spoken language multilingualism, and spoken language choice. This is a companion article to the qualitative investigation described in Crowe,…
Accelerating Receptive Language Acquisition in Kindergarten Students: An Action Research Study
ERIC Educational Resources Information Center
Hewitt, Christine L.
2013-01-01
Receptive language skills allow students to understand the meaning of words spoken to them. When students are unable to comprehend the majority of the words that are spoken to them, they do not have the ability to act on those words, follow given directions, build on prior knowledge, or construct adequate meaning. The inability to understand the…
Taha, Haitham
2017-06-01
The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken-language version (SL), a pure standard-language version (SA), and a phonologically similar version (PS). The results showed that for immediate free recall, performance was better in the SL and PS conditions than in the SA condition. However, for delayed recall and recognition, the results did not reveal any significant consistent effect of diglossia. Accordingly, it was suggested that diglossia has a significant effect on storage and short-term memory functions but not on long-term memory functions. The results were discussed in light of different approaches in the field of bilingual memory.
ERIC Educational Resources Information Center
McCartney, Elspeth; Boyle, James; Ellis, Sue
2015-01-01
Background: Some children in areas of social deprivation in Scotland have lower reading attainment than neighbouring children in less deprived areas, and some of these also have lower spoken language comprehension skills than expected by assessment norms. There is a need to develop effective reading comprehension interventions that fit easily into…
Langereis, Margreet; Vermeulen, Anneke
2015-06-01
This study aimed to evaluate the long-term effects of cochlear implantation (CI) on the auditory, language, educational and social-emotional development of deaf children in different educational-communicative settings. The outcomes of 58 children with profound hearing loss and normal non-verbal cognition were analyzed after 60 months of CI use. At testing, the children were enrolled in three different educational settings: mainstream education, where spoken language is used; hard-of-hearing education, where sign-supported spoken language is used; and bilingual deaf education, with Sign Language of the Netherlands and Sign Supported Dutch. Children were assessed on auditory speech perception, receptive language, educational attainment and wellbeing. The auditory speech perception of children with CI in mainstream education enables them to acquire language and educational levels comparable to those of their normal-hearing peers. Although the children in mainstream and hard-of-hearing settings show similar speech perception abilities, language development in children in hard-of-hearing settings lags significantly behind. Speech perception, language and educational attainments of children in deaf education remained extremely poor. Furthermore, more children are resilient in mainstream and hard-of-hearing environments than in deaf educational settings. Regression analyses showed an important influence of educational setting. Children with CI who are placed early in intervention environments that facilitate auditory development are able to achieve good auditory speech perception, language and educational levels in the long term. Most parents of these children report no social-emotional concerns. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Novel Spoken Word Learning in Adults with Developmental Dyslexia
ERIC Educational Resources Information Center
Conner, Peggy S.
2013-01-01
A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…
Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar
2015-12-01
The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant (CI) or hearing aid (HA) efficacy in Persian-language children with severe-to-profound hearing loss. The research was administered as a cross-sectional study with a sample of 60 Persian children aged 5-7 years. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two conditions: auditory-only and audiovisual presentation. The test was a closed set of 30 words presented orally by a speech-language pathologist. The scores for audiovisual word perception were significantly higher than for the auditory-only condition in the children with normal hearing (P<0.01) and with cochlear implants (P<0.05); however, in the children with hearing aids, there was no significant difference between word perception scores in the auditory-only and audiovisual presentation conditions (P>0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe-to-profound hearing loss in order to determine whether a cochlear implant or hearing aid has been effective for them; that is, if a child with hearing impairment who uses a CI or HA obtains higher scores in audiovisual spoken word recognition than in the auditory-only condition, his or her auditory skills have developed appropriately, indicating an effective CI or HA as one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Houston, Derek M.; Bergeson, Tonya R.
2013-01-01
The advent of cochlear implantation has provided thousands of deaf infants and children access to speech and the opportunity to learn spoken language. Whether or not deaf infants successfully learn spoken language after implantation may depend in part on the extent to which they listen to speech rather than just hear it. We explore this question by examining the role that attention to speech plays in early language development according to a prominent model of infant speech perception – Jusczyk’s WRAPSA model – and by reviewing the kinds of speech input that maintains normal-hearing infants’ attention. We then review recent findings suggesting that cochlear-implanted infants’ attention to speech is reduced compared to normal-hearing infants and that speech input to these infants differs from input to infants with normal hearing. Finally, we discuss possible roles attention to speech may play on deaf children’s language acquisition after cochlear implantation in light of these findings and predictions from Jusczyk’s WRAPSA model. PMID:24729634
Win-win: advancing written language knowledge and practice through university clinics.
Katz, Lauren A; Fallon, Karen A
2015-02-01
Speech-language pathologists (SLPs) are uniquely suited to assess and treat individuals with both spoken and written language disorders. Yet as students move from the elementary grades into the middle and high school grades, SLPs tend to provide fewer direct language services to them. Although spoken language disorders often become written language disorders, SLPs are not receiving sufficient training in the area of written language, and this is reflected in the extent to which they believe they have the knowledge and skills to provide services to struggling readers and writers on their caseloads. In this article, we discuss these problems and present effective methods for addressing them.
ERIC Educational Resources Information Center
Arndt, Karen Barako; Schuele, C. Melanie
2013-01-01
Complex syntax production emerges shortly after the emergence of two-word combinations in oral language and continues to develop through the school-age years. This article defines a framework for the analysis of complex syntax in the spontaneous language of preschool- and early school-age children. The purpose of this article is to provide…
Iconicity in English and Spanish and Its Relation to Lexical Category and Age of Acquisition
Lupyan, Gary
2015-01-01
Signed languages exhibit iconicity (resemblance between form and meaning) across their vocabulary, and many non-Indo-European spoken languages feature sizable classes of iconic words known as ideophones. In comparison, Indo-European languages like English and Spanish are believed to be arbitrary outside of a small number of onomatopoeic words. In three experiments with English and two with Spanish, we asked native speakers to rate the iconicity of ~600 words from the English and Spanish MacArthur-Bates Communicative Developmental Inventories. We found that iconicity in the words of both languages varied in a theoretically meaningful way with lexical category. In both languages, adjectives were rated as more iconic than nouns and function words, and corresponding to typological differences between English and Spanish in verb semantics, English verbs were rated as relatively iconic compared to Spanish verbs. We also found that both languages exhibited a negative relationship between iconicity ratings and age of acquisition. Words learned earlier tended to be more iconic, suggesting that iconicity in early vocabulary may aid word learning. Altogether these findings show that iconicity is a graded quality that pervades vocabularies of even the most “arbitrary” spoken languages. The findings provide compelling evidence that iconicity is an important property of all languages, signed and spoken, including Indo-European languages. PMID:26340349
McCartney, Elspeth; Boyle, James; Ellis, Sue
2015-01-01
Some children in areas of social deprivation in Scotland have lower reading attainment than neighbouring children in less deprived areas, and some of these also have lower spoken language comprehension skills than expected by assessment norms. There is a need to develop effective reading comprehension interventions that fit easily into the school curriculum and can benefit all pupils. A feasibility study of reading comprehension strategies with existing evidence of efficacy was undertaken in three mainstream primary schools within an area of social deprivation in west central Scotland, to decide whether further investigation of this intervention was warranted. Aims were to measure comprehension of spoken language and reading via standardised assessments towards the beginning of the school year (T1) in mainstream primary school classrooms within an area of social deprivation; to have teachers introduce previously validated text comprehension strategies; and to measure change in reading comprehension outcome measures towards the end of the year (T2). A pre- and post-intervention cohort design was used. Reading comprehension strategies were introduced to staff in participating schools and used throughout the school year as part of ongoing reading instruction. Spoken language comprehension was measured by TROG-2 at T1, and reading progress by score changes from T1 to T2 on the WIAT-II(UK)-T reading comprehension scale. Forty-seven pupils in five classes in three primary schools took part: 38% had TROG-2 scores below the 10th centile. As a group, children made good reading comprehension progress, with a medium effect size of 0.46. Children with TROG-2 scores below the 10th centile had lower mean reading scores than others at T1 and T2, although with considerable overlap. However, TROG-2 did not make a unique contribution to reading progress: children below the 10th centile made as much progress as other children.
The intervention was welcomed by schools, and the measure of reading comprehension proved responsive to change. The outcomes suggest the reading intervention may be effective for children with and without spoken language comprehension difficulties, and warrants further investigation in larger, controlled, studies. © 2014 Royal College of Speech and Language Therapists.
Language choice in bimodal bilingual development.
Lillo-Martin, Diane; de Quadros, Ronice M; Chen Pichler, Deborah; Fieldsteel, Zoe
2014-01-01
Bilingual children develop sensitivity to the language used by their interlocutors at an early age, reflected in differential use of each language by the child depending on their interlocutor. Factors such as discourse context and relative language dominance in the community may mediate the degree of language differentiation in preschool age children. Bimodal bilingual children, acquiring both a sign language and a spoken language, have an even more complex situation. Their Deaf parents vary considerably in access to the spoken language. Furthermore, in addition to code-mixing and code-switching, they use code-blending-expressions in both speech and sign simultaneously-an option uniquely available to bimodal bilinguals. Code-blending is analogous to code-switching sociolinguistically, but is also a way to communicate without suppressing one language. For adult bimodal bilinguals, complete suppression of the non-selected language is cognitively demanding. We expect that bimodal bilingual children also find suppression difficult, and use blending rather than suppression in some contexts. We also expect relative community language dominance to be a factor in children's language choices. This study analyzes longitudinal spontaneous production data from four bimodal bilingual children and their Deaf and hearing interlocutors. Even at the earliest observations, the children produced more signed utterances with Deaf interlocutors and more speech with hearing interlocutors. However, while three of the four children produced >75% speech alone in speech target sessions, they produced <25% sign alone in sign target sessions. All four produced bimodal utterances in both, but more frequently in the sign sessions, potentially because they find suppression of the dominant language more difficult. Our results indicate that these children are sensitive to the language used by their interlocutors, while showing considerable influence from the dominant community language.
PMID:25368591
Grammar of Kove: An Austronesian Language of the West New Britain Province, Papua New Guinea
ERIC Educational Resources Information Center
Sato, Hiroko
2013-01-01
This dissertation is a descriptive grammar of Kove, an Austronesian language spoken in the West New Britain Province of Papua New Guinea. Kove is primarily spoken in 18 villages, including some on the small islands north of New Britain. There are about 9,000 people living in the area, but many are not fluent speakers of Kove. The dissertation…
A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition
ERIC Educational Resources Information Center
Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko
2015-01-01
When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…
Mills, Brian; Lai, Janie; Brown, Timothy T.; Erhart, Matthew; Halgren, Eric; Reilly, Judy; Dale, Anders; Appelbaum, Mark; Moses, Pamela
2013-01-01
This study investigated the relationship between white matter microstructure and the development of morphosyntax in a spoken narrative in typically developing (TD) children and in children with high-functioning autism (HFA). Autism is characterized by language and communication impairments, yet the relationship between morphosyntactic development in spontaneous discourse contexts and neural development is not well understood in either this population or typical development. Diffusion tensor imaging (DTI) was used to assess multiple parameters of diffusivity as indicators of white matter tract integrity in language-related tracts in children between 6 and 13 years of age. Children were asked to spontaneously tell a story about a time when someone made them sad, mad, or angry. The story was evaluated for morphological accuracy and syntactic complexity. Analysis of the relationship between white matter microstructure and language performance in TD children showed that diffusivity correlated with morphosyntax production in the superior longitudinal fasciculus (SLF), a fiber tract traditionally associated with language. At the anatomical level, the HFA group showed abnormal diffusivity in the right inferior longitudinal fasciculus (ILF) relative to the TD group. Within the HFA group, children with greater white matter integrity in the right ILF displayed greater morphological accuracy during their spoken narrative. Overall, the current study shows an association between white matter structure in a traditional language pathway and narrative performance in TD children. In the autism group, associations were found only in the ILF, suggesting that during real-world language use, children with HFA rely less on typical pathways and instead rely on alternative ventral pathways that possibly mediate visual elements of language. PMID:23810972
ERIC Educational Resources Information Center
Chen, Pei-Hua; Liu, Ting-Wei
2017-01-01
Telepractice provides an alternative form of auditory-verbal therapy (eAVT) intervention through videoconferencing; this can be of immense benefit for children with hearing loss, especially those living in rural or remote areas. The effectiveness of eAVT for the language development of Mandarin-speaking preschoolers with hearing loss was…
ERIC Educational Resources Information Center
Houston, K. Todd
2010-01-01
Since 1946, Utah State University (USU) has offered specialized coursework in audiology and speech-language pathology, awarding the first graduate degrees in 1948. In 1965, the teacher training program in deaf education was launched. Over the years, the Department of Communicative Disorders and Deaf Education (COMD-DE) has developed a rich history…
ERIC Educational Resources Information Center
Yoon, Sae Yeol
2012-01-01
The purpose of this study was to explore the development of students' understanding through writing while immersed in an environment where there was a strong emphasis on a language-based argument inquiry approach. Additionally, this study explored students' spoken discourse to gain a better understanding of what role(s) talking plays in…
d/Deaf and Hard of Hearing Multilingual Learners: The Development of Communication and Language
ERIC Educational Resources Information Center
Pizzo, Lianna
2016-01-01
The author examines the theory and research relevant to educating d/Deaf and Hard of Hearing Multilingual Learners (DMLs). There is minimal research on this population, yet a synthesis of related theory, research, and practice on spoken-language bilinguals can be used to add to the body of knowledge on these learners. Specifically, the author…
Persuasive Talk in Social Contexts: Development, Assessment, and Intervention.
ERIC Educational Resources Information Center
Nippold, Marilyn A.
1994-01-01
This article reviews the developmental literature in spoken persuasion and discusses implications for assessment and intervention with students with language-learning disorders, in terms of persuading others, analyzing persuasive appeals, and responding to persuasive appeals. (JDD)
ERIC Educational Resources Information Center
Kretschmer, Richard R.; Kretschmer, Laura; Kuwahara, Katsura; Truax, Roberta
2010-01-01
This study described the communication and spoken language development of a Japanese girl with profound hearing loss who used a cochlear implant from 19 months of age. The girl, Akiko, was born in Belgium where her family was living at that time. After she was identified as deaf at birth, she and her parents were provided with support services.…
Marchman, Virginia A; Loi, Elizabeth C; Adams, Katherine A; Ashland, Melanie; Fernald, Anne; Feldman, Heidi M
2018-04-01
Identifying which preterm (PT) children are at increased risk of language and learning differences increases opportunities for participation in interventions that improve outcomes. Speed in spoken language comprehension at early stages of language development requires information processing skills that may form the foundation for later language and school-relevant skills. In children born full-term, speed of comprehending words in an eye-tracking task at 2 years old predicted language and nonverbal cognition at 8 years old. Here, we explore the extent to which speed of language comprehension at 1.5 years old predicts both verbal and nonverbal outcomes at 4.5 years old in children born PT. Participants were children born PT (n = 47; ≤32 weeks gestation). Children were tested in the "looking-while-listening" task at 18 months old, adjusted for prematurity, to generate a measure of speed of language comprehension. Parent report and direct assessments of language were also administered. Children were later retested on a test battery of school-relevant skills at 4.5 years old. Speed of language comprehension at 18 months old predicted significant unique variance (12%-31%) in receptive vocabulary, global language abilities, and nonverbal intelligence quotient (IQ) at 4.5 years, controlling for socioeconomic status, gestational age, and medical complications of PT birth. Speed of language comprehension remained uniquely predictive (5%-12%) when also controlling for children's language skills at 18 months old. Individual differences in speed of spoken language comprehension may serve as a marker for neuropsychological processes that are critical for the development of school-relevant linguistic skills and nonverbal IQ in children born PT.
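The "unique variance" figures in the abstract come from hierarchical regression: the gain in R² when speed of comprehension is added to a model that already contains the covariates. A hedged sketch on simulated data (the variable names and effect sizes are invented, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 47  # matches the study's sample size; the data themselves are simulated

# Hypothetical predictors and outcome (simulated, not the study's data)
ses = rng.normal(size=n)        # socioeconomic status covariate
gest_age = rng.normal(size=n)   # gestational age covariate
speed = rng.normal(size=n)      # speed of spoken language comprehension
outcome = 0.4 * ses + 0.5 * speed + rng.normal(scale=0.8, size=n)

def r_squared(predictors, y):
    """R-squared of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_covariates = r_squared([ses, gest_age], outcome)
r2_full = r_squared([ses, gest_age, speed], outcome)
# Unique variance = incremental R-squared from adding the predictor
print(f"unique variance of speed: {r2_full - r2_covariates:.3f}")
```

The incremental R² is never negative in-sample, so the interesting question the study answers is whether the gain is large and statistically reliable after the covariates are controlled.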
SyllabO+: A new tool to study sublexical phenomena in spoken Quebec French.
Bédard, Pascale; Audet, Anne-Marie; Drouin, Patrick; Roy, Johanna-Pascale; Rivard, Julie; Tremblay, Pascale
2017-10-01
Sublexical phonotactic regularities in language have a major impact on language development, as well as on speech processing and production throughout the entire lifespan. To understand the impact of phonotactic regularities on speech and language functions at the behavioral and neural levels, it is essential to have access to oral language corpora to study these complex phenomena in different languages. Yet, probably because of their complexity, oral language corpora remain less common than written language corpora. This article presents the first corpus and database of spoken Quebec French syllables and phones: SyllabO+. This corpus contains phonetic transcriptions of over 300,000 syllables (over 690,000 phones) extracted from recordings of 184 healthy adult native Quebec French speakers, ranging in age from 20 to 97 years. To ensure the representativeness of the corpus, these recordings were made in both formal and familiar communication contexts. Phonotactic distributional statistics (e.g., syllable and co-occurrence frequencies, percentages, percentile ranks, transition probabilities, and pointwise mutual information) were computed from the corpus. An open-access online application to search the database was developed, and is available at www.speechneurolab.ca/syllabo . In this article, we present a brief overview of the corpus, as well as the syllable and phone databases, and we discuss their practical applications in various fields of research, including cognitive neuroscience, psycholinguistics, neurolinguistics, experimental psychology, phonetics, and phonology. Nonacademic practical applications are also discussed, including uses in speech-language pathology.
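The distributional statistics a database like this reports (e.g., transition probabilities and pointwise mutual information over syllable bigrams) can be sketched on a toy syllable sequence; the syllables and counts below are invented, not drawn from SyllabO+:

```python
import math
from collections import Counter

# Toy syllable stream (illustrative; not SyllabO+ data)
syllables = ["pa", "la", "pa", "to", "la", "pa", "to", "pa", "la", "to"]

uni = Counter(syllables)                      # unigram (syllable) counts
bi = Counter(zip(syllables, syllables[1:]))   # bigram (co-occurrence) counts
n_uni = sum(uni.values())
n_bi = sum(bi.values())

def transition_prob(a, b):
    """P(next = b | current = a), from bigram counts."""
    return bi[(a, b)] / sum(c for (x, _), c in bi.items() if x == a)

def pmi(a, b):
    """Pointwise mutual information of the bigram (a, b), in bits."""
    p_ab = bi[(a, b)] / n_bi
    return math.log2(p_ab / ((uni[a] / n_uni) * (uni[b] / n_uni)))

print(transition_prob("pa", "la"), round(pmi("pa", "la"), 3))
```

Positive PMI means the pair co-occurs more often than its unigram frequencies alone would predict, which is exactly the kind of phonotactic regularity the corpus is designed to expose.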
Meinzen-Derr, Jareen; Wiley, Susan; McAuley, Rose; Smith, Laura; Grether, Sandra
2017-11-01
This pilot study assessed whether augmentative and alternative communication technology can enhance language development in children who are deaf or hard-of-hearing. Five children ages 5-10 years with permanent bilateral hearing loss who were identified with language underperformance participated in an individualized 24-week structured program using the application TouchChat WordPower on iPads®. Language samples were analyzed for changes in mean length of utterance, vocabulary words and mean turn length. Repeated measures models assessed change over time. The baseline median mean length of utterance was 2.41 (range 1.09-6.63; mean 2.88) and significantly increased over time (p = 0.002) to a median of 3.68 at final visit (range 1.97-6.81; mean 3.62). At baseline, the median total number of words spoken per language sample was 251 (range 101-458), with 100 (range 36-100) different words spoken. Total words and different words significantly increased over time (β = 26.8 (7.1), p = 0.001 for total words; β = 8.0 (2.7), p = 0.008 for different words). Mean turn length values also slightly increased over time. Using augmentative and alternative communication technology on iPads® shows promise in supporting rapid language growth among elementary school-age children who are deaf or hard-of-hearing with language underperformance.
Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora
Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.
2012-01-01
Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when we considered all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980
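The comparisons in this study reduce to correlating frequency vectors over the nine CV cells (three consonant places × three vowel positions) across sources. A minimal Pearson sketch with invented counts (the numbers below are illustrative, not the paper's):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical counts for the nine CV cells (labial/coronal/velar x front/central/back)
dictionary_counts = [120, 80, 40, 60, 150, 70, 50, 65, 130]      # "type" counts
spoken_counts = [300, 150, 90, 110, 400, 160, 95, 140, 310]      # "token" counts
print(round(pearson(dictionary_counts, spoken_counts), 2))
```

With real data, the paper's point is that this correlation can be strong for one language (here the toy vectors are nearly proportional, so r is close to 1) and weak or absent for another.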
Development of brain networks involved in spoken word processing of Mandarin Chinese.
Cao, Fan; Khalid, Kainat; Lee, Rebecca; Brennan, Christine; Yang, Yanhui; Li, Kuncheng; Bolger, Donald J; Booth, James R
2011-08-01
Developmental differences in phonological and orthographic processing of Chinese spoken words were examined in 9-year-olds, 11-year-olds and adults using functional magnetic resonance imaging (fMRI). Rhyming and spelling judgments were made to two-character words presented sequentially in the auditory modality. Developmental comparisons between adults and both groups of children combined showed that age-related changes in activation in visuo-orthographic regions depended on the task. There were developmental increases in the left inferior temporal gyrus and the right inferior occipital gyrus in the spelling task, suggesting more extensive visuo-orthographic processing in a task that required access to these representations. Conversely, there were developmental decreases in activation in the left fusiform gyrus and left middle occipital gyrus in the rhyming task, suggesting that the development of reading is marked by reduced involvement of orthography in a spoken language task that does not require access to these orthographic representations. Developmental decreases may arise from the existence of extensive homophony (auditory words that have multiple spellings) in Chinese. In addition, we found that 11-year-olds and adults showed similar activation in the left superior temporal gyrus across tasks, with both groups showing greater activation than 9-year-olds. This pattern suggests early development of perceptual representations of phonology. In contrast, 11-year-olds and 9-year-olds showed similar activation in the left inferior frontal gyrus across tasks, with both groups showing weaker activation than adults. This pattern suggests late development of controlled retrieval and selection of lexical representations. Altogether, this study suggests differential effects of character acquisition on development of components of the language network in Chinese as compared to previous reports on alphabetic languages. Published by Elsevier Inc.
Verbal redundancy aids memory for filmed entertainment dialogue.
Hinkin, Michael P; Harris, Richard J; Miranda, Andrew T
2014-01-01
Three studies investigated the effects of presentation modality and redundancy of verbal content on recognition memory for entertainment film dialogue. U.S. participants watched two brief movie clips and afterward answered multiple-choice questions about information from the dialogue. Experiment 1 compared recognition memory for spoken dialogue in the native language (English) with subtitles in English, French, or no subtitles. Experiment 2 compared memory for material in English subtitles with spoken dialogue in English, French, or no sound. Experiment 3 examined three control conditions with no spoken or captioned material in the native language. All participants watched the same video clips and answered the same questions. Performance was consistently good whenever English dialogue appeared in either the subtitles or sound, and best of all when it appeared in both, supporting the facilitation of verbal redundancy. Performance was also better when English was only in the subtitles than when it was only spoken. Unexpectedly, sound or subtitles in an unfamiliar language (French) modestly improved performance, as long as there was also a familiar channel. Results extend multimedia research on verbal redundancy for expository material to verbal information in entertainment media.
Lesion localization of speech comprehension deficits in chronic aphasia
Pillay, Sara B.; Binder, Jeffrey R.; Humphries, Colin; Gross, William L.; Book, Diane S.
2017-01-01
Objective: Voxel-based lesion-symptom mapping (VLSM) was used to localize impairments specific to multiword (phrase and sentence) spoken language comprehension. Methods: Participants were 51 right-handed patients with chronic left hemisphere stroke. They performed an auditory description naming (ADN) task requiring comprehension of a verbal description, an auditory sentence comprehension (ASC) task, and a picture naming (PN) task. Lesions were mapped using high-resolution MRI. VLSM analyses identified the lesion correlates of ADN and ASC impairment, first with no control measures, then adding PN impairment as a covariate to control for cognitive and language processes not specific to spoken language. Results: ADN and ASC deficits were associated with lesions in a distributed frontal-temporal parietal language network. When PN impairment was included as a covariate, both ADN and ASC deficits were specifically correlated with damage localized to the mid-to-posterior portion of the middle temporal gyrus (MTG). Conclusions: Damage to the mid-to-posterior MTG is associated with an inability to integrate multiword utterances during comprehension of spoken language. Impairment of this integration process likely underlies the speech comprehension deficits characteristic of Wernicke aphasia. PMID:28179469
The Bilingual Language Interaction Network for Comprehension of Speech*
Marian, Viorica
2013-01-01
During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension. PMID:24363602
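BLINCS builds its interconnected processing levels from self-organizing maps (SOMs). The generic SOM update such models rely on can be sketched as follows (a 1-D map over 2-D inputs with illustrative parameters; this is not the BLINCS implementation itself):

```python
import math
import random

random.seed(0)
n_units, dim = 10, 2
# Map units start with random weight vectors in [0, 1)
weights = [[random.random() for _ in range(dim)] for _ in range(n_units)]

def best_matching_unit(x):
    """Index of the unit whose weight vector is closest to input x."""
    return min(range(n_units),
               key=lambda u: sum((weights[u][d] - x[d]) ** 2 for d in range(dim)))

def train(data, epochs=20, lr=0.3, radius=2.0):
    for _ in range(epochs):
        for x in data:
            bmu = best_matching_unit(x)
            for u in range(n_units):
                # Gaussian neighborhood: units near the winner on the map move more
                h = math.exp(-((u - bmu) ** 2) / (2 * radius ** 2))
                for d in range(dim):
                    weights[u][d] += lr * h * (x[d] - weights[u][d])

data = [[random.random(), random.random()] for _ in range(50)]
train(data)
print(best_matching_unit(data[0]))  # winning unit for the first input
```

After training, nearby units on the map respond to similar inputs, which is the topographic-organization property such models exploit to let levels of representation "hook up" to one another.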
Semantic and phonological schema influence spoken word learning and overnight consolidation.
Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H
2018-06-01
We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.
Schreibman, Laura; Stahmer, Aubyn C
2014-05-01
Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT), to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS), on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.
ERIC Educational Resources Information Center
Maldonado Torres, Sonia Enid
2016-01-01
The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…
ERIC Educational Resources Information Center
De Angelis, Gessica
2014-01-01
The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…
Moats, L C
1994-01-01
Reading research supports the necessity for directly teaching concepts about linguistic structure to beginning readers and to students with reading and spelling difficulties. In this study, experienced teachers of reading, language arts, and special education were tested to determine if they have the requisite awareness of language elements (e.g., phonemes, morphemes) and of how these elements are represented in writing (e.g., knowledge of sound-symbol correspondences). The results were surprisingly poor, indicating that even motivated and experienced teachers typically understand too little about spoken and written language structure to be able to provide sufficient instruction in these areas. The utility of language structure knowledge for instructional planning, for assessment of student progress, and for remediation of literacy problems is discussed. The teachers participating in the study subsequently took a course focusing on phonemic awareness training, spoken-written language relationships, and careful analysis of spelling and reading behavior in children. At the end of the course, the teachers judged this information to be essential for teaching and advised that it become a prerequisite for certification. Recommendations for requirements and content of teacher education programs are presented.
Flores, Glenn; Abreu, Milagros; Tomany-Korman, Sandra C
2005-01-01
Approximately 3.5 million U.S. schoolchildren are limited in English proficiency (LEP). Disparities in children's health and health care are associated with both LEP and speaking a language other than English at home, but prior research has not examined which of these two measures of language barriers is most useful in examining health care disparities. Our objectives were to compare primary language spoken at home vs. parental LEP and their associations with health status, access to care, and use of health services in children. We surveyed parents at urban community sites in Boston, asking 74 questions on children's health status, access to health care, and use of health services. Some 98% of the 1,100 participating children and families were of non-white race/ethnicity, 72% of parents were LEP, and 13 different primary languages were spoken at home. "Dose-response" relationships were observed between parental English proficiency and several child and parental sociodemographic features, including children's insurance coverage, parental educational attainment, citizenship and employment, and family income. Similar "dose-response" relationships were noted between the primary language spoken at home and many but not all of the same sociodemographic features. In multivariate analyses, LEP parents were associated with triple the odds of a child having fair/poor health status, double the odds of the child spending at least one day in bed for illness in the past year, and significantly greater odds of children not being brought in for needed medical care for six of nine access barriers to care. None of these findings were observed in analyses of the primary language spoken at home. Individual parental LEP categories were associated with different risks of adverse health status and outcomes. Parental LEP is superior to the primary language spoken at home as a measure of the impact of language barriers on children's health and health care. 
Individual parental LEP categories are associated with different risks of adverse outcomes in children's health and health care. Consistent data collection on parental English proficiency and referral of LEP parents to English classes by pediatric providers have the potential to contribute toward reduction and elimination of health care disparities for children of LEP parents.
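Findings phrased as "triple the odds" come from (adjusted) odds ratios; in its unadjusted form, an odds ratio is just the cross-product of a 2×2 table. A toy sketch with invented counts (not the study's data):

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds of the outcome among exposed vs unexposed (cross-product ratio)."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Hypothetical table: LEP parents (exposed) vs English-proficient parents
# (unexposed); outcome = child reported in fair/poor health
print(odds_ratio(90, 210, 30, 210))
```

In the study itself the reported odds ratios are adjusted estimates from multivariate models, but they are interpreted the same way: a value of 3 means triple the odds of the outcome in the exposed group.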
Attentional Capture of Objects Referred to by Spoken Language
ERIC Educational Resources Information Center
Salverda, Anne Pier; Altmann, Gerry T. M.
2011-01-01
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…
ERIC Educational Resources Information Center
Schwarz, Amy Louise; Guajardo, Jennifer; Hart, Rebecca
2017-01-01
Deaf and hard-of-hearing (DHH) literature suggests that there are different read-aloud goals for DHH prereaders based on the spoken and visual communication modes DHH prereaders use, such as: American Sign Language (ASL), simultaneously signed and spoken English (SimCom), and predominately spoken English only. To date, no studies have surveyed…
Iconicity as a General Property of Language: Evidence from Spoken and Signed Languages
Perniss, Pamela; Thompson, Robin L.; Vigliocco, Gabriella
2010-01-01
Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings found in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to “hook up” to motor, perceptual, and affective experience. PMID:21833282
The gradual emergence of phonological form in a new language
Aronoff, Mark; Meir, Irit; Padden, Carol
2011-01-01
The division of linguistic structure into a meaningless (phonological) level and a meaningful level of morphemes and words is considered a basic design feature of human language. Although established sign languages, like spoken languages, have been shown to be characterized by this bifurcation, no information has been available about the way in which such structure arises. We report here on a newly emerging sign language, Al-Sayyid Bedouin Sign Language, which functions as a full language but in which a phonological level of structure has not yet emerged. Early indications of formal regularities provide clues to the way in which phonological structure may develop over time. PMID:22223927
ERIC Educational Resources Information Center
Hayes, Heather
2010-01-01
Developments in universal newborn hearing screening programs and assistive hearing technology have had considerable effects on the speech, language, and educational success of children who are deaf or hard of hearing. Several recent research studies of children who are deaf or hard of hearing and who use spoken language as their primary method of…
ERIC Educational Resources Information Center
Forteza Fernandez, Rafael Filiberto; Korneeva, Larisa I.
2017-01-01
Based on Selinker's hypothesis of five psycholinguistic processes shaping interlanguage (1972), the paper focuses attention on the Russian L2-learners' overreliance on the L1 as the main factor hindering their development. The research problem is, therefore, the high incidence of L1 transfer in the spoken and written English language output of…
The relation between working memory and language comprehension in signers and speakers.
Emmorey, Karen; Giezen, Marcel R; Petrich, Jennifer A F; Spurgeon, Erin; O'Grady Farnady, Lucinda
2017-06-01
This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL-English bilinguals completed several spatial and linguistic serial recall tasks. Additionally, their comprehension of spatial and non-spatial information in ASL and spoken English narratives was assessed. Results from the linguistic serial recall tasks revealed that the often reported advantage for speakers on linguistic short-term memory tasks does not extend to complex WM tasks with a serial recall component. For English, linguistic WM predicted retention of non-spatial information, and both linguistic and spatial WM predicted retention of spatial information. For ASL, spatial WM predicted retention of spatial (but not non-spatial) information, and linguistic WM did not predict retention of either spatial or non-spatial information. Overall, our findings argue against strong assumptions of independent domain-specific subsystems for the storage and processing of linguistic and spatial information and furthermore suggest a less important role for serial encoding in signed than spoken language comprehension. Copyright © 2017 Elsevier B.V. All rights reserved.
Experiments on Urdu Text Recognition
NASA Astrophysics Data System (ADS)
Mukhtar, Omar; Setlur, Srirangaraj; Govindaraju, Venu
Urdu is a language spoken in the Indian subcontinent by an estimated 130-270 million speakers. At the spoken level, Urdu and Hindi are considered dialects of a single language because of shared vocabulary and the similarity in grammar. At the written level, however, Urdu is much closer to Arabic because it is written in Nastaliq, the calligraphic style of the Persian-Arabic script. Therefore, a speaker of Hindi can understand spoken Urdu but may not be able to read written Urdu because Hindi is written in Devanagari script, whereas a reader of Arabic can read the written Urdu words but may not understand them when spoken. In this chapter we present an overview of written Urdu. Prior research in handwritten Urdu OCR is very limited. We present (perhaps) the first system for recognizing handwritten Urdu words. On a data set of about 1300 handwritten words, we achieved an accuracy of 70% for the top choice, and 82% for the top three choices.
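The reported 70% top-choice and 82% top-three figures are top-k accuracies over the recognizer's ranked candidate lists. A minimal sketch (the candidate lists and transliterated Urdu words below are invented for illustration):

```python
def top_k_accuracy(candidates, truths, k):
    """Fraction of items whose true label appears in the top-k candidate list."""
    hits = sum(truth in ranked[:k] for ranked, truth in zip(candidates, truths))
    return hits / len(truths)

# Hypothetical ranked recognizer outputs for five handwritten words
candidates = [
    ["kitab", "kitna", "kal"],
    ["shahr", "sher", "shor"],
    ["pani", "piya", "pari"],
    ["dil", "din", "dal"],
    ["raat", "rang", "raah"],
]
truths = ["kitab", "sher", "pari", "bada", "raat"]
print(top_k_accuracy(candidates, truths, 1), top_k_accuracy(candidates, truths, 3))
```

Top-3 accuracy is always at least top-1 accuracy, since every top-1 hit is also a top-3 hit; the gap (here 0.4 vs 0.8) shows how often the correct word is ranked second or third.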
Talking with Young Children: How Teachers Encourage Learning
ERIC Educational Resources Information Center
Test, Joan E.; Cunningham, Denise D.; Lee, Amanda C.
2010-01-01
In general, talking with young children encourages development in many areas: (1) spoken language; (2) early literacy; (3) cognitive development; (4) social skills; and (5) emotional maturity. Speaking with children in increasingly complex and responsive ways does this even better. This article explores research findings about the effects of…
The comprehension skills of children learning English as an additional language.
Burgoyne, K; Kelly, J M; Whiteley, H E; Spooner, A
2009-12-01
Data from national test results suggest that children who are learning English as an additional language (EAL) experience relatively lower levels of educational attainment in comparison to their monolingual, English-speaking peers. The relative underachievement of children who are learning EAL demands that the literacy needs of this group be identified. To this end, this study aimed to explore the reading- and comprehension-related skills of a group of EAL learners. Data are reported from 92 Year 3 pupils, of whom 46 children are learning EAL. Children completed standardized measures of reading accuracy and comprehension, listening comprehension, and receptive and expressive vocabulary. Results indicate that many EAL learners experience difficulties in understanding written and spoken text. These comprehension difficulties are not related to decoding problems but are related to significantly lower levels of vocabulary knowledge experienced by this group. Many EAL learners experience significantly lower levels of English vocabulary knowledge, which has a significant impact on their ability to understand written and spoken text. Greater emphasis on language development is therefore needed in the school curriculum to attempt to address the limited language skills of children learning EAL.
Talker familiarity and spoken word recognition in school-age children*
Levi, Susannah V.
2014-01-01
Research with adults has shown that spoken language processing is improved when listeners are familiar with talkers’ voices, known as the familiar talker advantage. The current study explored whether this ability extends to school-age children, who are still acquiring language. Children were familiarized with the voices of three German–English bilingual talkers and were tested on the speech of six bilinguals, three of whom were familiar. Results revealed that children do show improved spoken language processing when they are familiar with the talkers, but this improvement was limited to highly familiar lexical items. This restriction of the familiar talker advantage is attributed to differences in the representation of highly familiar and less familiar lexical items. In addition, children did not exhibit accent-general learning; despite having been exposed to German-accented talkers during training, there was no improvement for novel German-accented talkers. PMID:25159173
Language planning for the 21st century: revisiting bilingual language policy for deaf children.
Knoors, Harry; Marschark, Marc
2012-01-01
For over 25 years in some countries and more recently in others, bilingual education involving sign language and the written/spoken vernacular has been considered an essential educational intervention for deaf children. With the recent growth in universal newborn hearing screening and technological advances such as digital hearing aids and cochlear implants, however, more deaf children than ever before have the potential for acquiring spoken language. As a result, the question arises as to the role of sign language and bilingual education for deaf children, particularly those who are very young. On the basis of recent research and fully recognizing the historical sensitivity of this issue, we suggest that language planning and language policy should be revisited in an effort to ensure that they are appropriate for the increasingly diverse population of deaf children.
How long-term memory and accentuation interact during spoken language comprehension.
Li, Xiaoqing; Yang, Yufang
2013-04-01
Spoken language comprehension requires immediate integration of different information types, such as semantics, syntax, and prosody. Meanwhile, both the information derived from speech signals and the information retrieved from long-term memory exert their influence on language comprehension immediately. Using EEG (electroencephalogram), the present study investigated how the information retrieved from long-term memory interacts with accentuation during spoken language comprehension. Mini Chinese discourses were used as stimuli, with an interrogative or assertive context sentence preceding the target sentence. The target sentence included one critical word conveying new information. The critical word was either highly expected or lowly expected given the information retrieved from long-term memory. Moreover, the critical word was either consistently accented or inconsistently de-accented. The results revealed that for lowly expected new information, inconsistently de-accented words elicited a larger N400 and larger theta power increases (4-6 Hz) than consistently accented words. In contrast, for the highly expected new information, consistently accented words elicited a larger N400 and larger alpha power decreases (8-14 Hz) than inconsistently de-accented words. The results suggest that, during spoken language comprehension, the effect of accentuation interacted with the information retrieved from long-term memory immediately. Moreover, our results also have important consequences for our understanding of the processing nature of the N400. The N400 amplitude is not only enhanced for incorrect information (new and de-accented words) but also enhanced for correct information (new and accented words). Copyright © 2013 Elsevier Ltd. All rights reserved.
Humphries, Tom; Kushalnagar, Poorna; Mathur, Gaurav; Napoli, Donna Jo; Padden, Carol; Rathmann, Christian; Smith, Scott R
2012-04-02
Children acquire language without instruction as long as they are regularly and meaningfully engaged with an accessible human language. Today, 80% of children born deaf in the developed world are implanted with cochlear devices that allow some of them access to sound in their early years, which helps them to develop speech. However, because of brain plasticity changes during early childhood, children who have not acquired a first language in the early years might never be completely fluent in any language. If they miss this critical period for exposure to a natural language, their subsequent development of the cognitive activities that rely on a solid first language might be underdeveloped, such as literacy, memory organization, and number manipulation. An alternative to speech-exclusive approaches to language acquisition exists in the use of sign languages such as American Sign Language (ASL), where acquiring a sign language is subject to the same time constraints as spoken language development. Unfortunately, so far, these alternatives are caught up in an "either/or" dilemma, leading to a highly polarized conflict about which system families should choose for their children, with little tolerance for alternatives by either side of the debate and widespread misinformation about the evidence and implications for or against either approach. The success rate with cochlear implants is highly variable. This issue is still debated, and as far as we know, there are no reliable predictors for success with implants. Yet families are often advised not to expose their child to sign language. Here, absolute positions based on ideology create pressures for parents that might jeopardize the real developmental needs of deaf children. What we do know is that cochlear implants do not offer accessible language to many deaf children.
By the time it is clear that the deaf child is not acquiring spoken language with cochlear devices, it might already be past the critical period, and the child runs the risk of becoming linguistically deprived. Linguistic deprivation constitutes multiple personal harms as well as harms to society (in terms of costs to our medical systems and in loss of potential productive societal participation).
Design and Development of a Nonverbal Program
ERIC Educational Resources Information Center
Thiagarajan, S.
1973-01-01
Problems were encountered in designing an illustrated program on contraceptive techniques for India's rural population, where illiteracy is high and hundreds of different languages are spoken. Field trials of a picture program indicated that the ability to "read" a picture is an acquired skill. (Author)
The grammatical morpheme deficit in moderate hearing impairment.
McGuckian, Maria; Henry, Alison
2007-03-01
Much remains unknown about grammatical morpheme (GM) acquisition by children with moderate hearing impairment (HI) acquiring spoken English. The aims were to investigate how moderate HI affects the use of GMs in speech and to provide an explanation for the pattern of findings. Elicited and spontaneous speech data were collected from children with moderate HI (n = 10; mean age = 7;4 years) and a control group of typically developing children (n = 10; mean age = 3;2 years) with equivalent mean length of utterance (MLU). The data were analysed to determine the use of ten GMs of English. Comparisons were made between the groups for rates of correct GM production, for types and rates of GM errors, and for order of GM accuracy. The findings revealed significant differences between the HI group and the control group for correct production of five GMs. The differences were not all in the same direction. The HI group produced possessive -s and plural -s significantly less frequently than the controls (not simply explained by the perceptual saliency of -s) and produced progressive -ing, articles and irregular past tense significantly more frequently than the controls. Moreover, the order of GM accuracy for the HI group did not correlate with that observed for the control group. Various factors were analysed in an attempt to explain the order of GM accuracy for the HI group (i.e. perceptual saliency, syntactic category, semantics and frequency of GMs in input). Frequency of GMs in the input was the most successful explanation for the overall pattern of GM accuracy. Interestingly, the order of GM accuracy for the HI group (acquiring spoken English as a first language) was characteristic of that reported for individuals learning English as a second language. An explanation for the findings is drawn from a factor that connects these different groups of language learners, i.e. limited access to spoken English input.
It is argued that, because of hearing factors, the children with HI are below a threshold for intake of spoken language input (a threshold easily reached by the controls). Thus, the children with HI are more input-dependent at the point in development studied and as such are more sensitive to input frequency effects. The findings suggest that optimizing or indeed increasing auditory input of GMs may have a positive impact on GM development for children with moderate HI.
Who's on First? Investigating the referential hierarchy in simple native ASL narratives.
Frederiksen, Anne Therese; Mayberry, Rachel I
2016-09-01
Discussions of reference tracking in spoken languages often invoke some version of a referential hierarchy. In this paper, we asked whether this hierarchy applies equally well to reference tracking in a visual language, American Sign Language, or whether modality differences influence its structure. Expanding the results of previous studies, this study looked at ASL referential devices beyond nouns, pronouns, and zero anaphora. We elicited four simple narratives from eight native ASL signers, and examined how the signers tracked reference throughout their stories. We found that ASL signers follow general principles of the referential hierarchy proposed for spoken languages by using nouns for referent introductions, and zero anaphora for referent maintenance. However, we also found significant differences such as the absence of pronouns in the narratives, despite their existence in ASL, and differential use of verbal and constructed action zero anaphora. Moreover, we found that native signers' use of classifiers varied with discourse status in a way that deviated from our expectations derived from the referential hierarchy for spoken languages. On this basis, we propose a tentative hierarchy of referential expressions for ASL that incorporates modality specific referential devices.
Syntax and reading comprehension: a meta-analysis of different spoken-syntax assessments.
Brimo, Danielle; Lund, Emily; Sapp, Alysha
2018-05-01
Syntax is a language skill purported to support children's reading comprehension. However, researchers who have examined whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments report inconsistent results. The aim was to determine whether differences in how syntax is measured affect whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments. Studies that included a group comparison design, children with average and below-average reading comprehension, and a spoken-syntax assessment were selected for review. Fourteen articles from a total of 1281 reviewed met the inclusionary criteria. The 14 articles were coded for the age of the children, score on the reading comprehension assessment, type of spoken-syntax assessment, type of syntax construct measured and score on the spoken-syntax assessment. A random-effects model was used to analyze the difference between the effect sizes of the types of spoken-syntax assessments and the difference between the effect sizes of the syntax constructs measured. There was a significant difference between children with average and below-average reading comprehension on spoken-syntax assessments. Those with average and below-average reading comprehension scored significantly differently on spoken-syntax assessments when norm-referenced and researcher-created assessments were compared. However, when the type of construct was compared, children with average and below-average reading comprehension scored significantly differently on assessments that measured knowledge of spoken syntax, but not on assessments that measured awareness of spoken syntax.
The results of this meta-analysis confirmed that the type of spoken-syntax assessment, whether norm-referenced or researcher-created, did not explain why some researchers reported that there were no significant differences between children with average and below-average reading comprehension, but the syntax construct, awareness or knowledge, did. Thus, when selecting how to measure syntax among school-age children, researchers and practitioners should evaluate whether they are measuring children's awareness of spoken syntax or knowledge of spoken syntax. Other differences, such as participant diagnosis and the format of items on the spoken-syntax assessments, also were discussed as possible explanations for why researchers found that children with average and below-average reading comprehension did not score significantly differently on spoken-syntax assessments.
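The random-effects comparison of effect sizes described above can be sketched with the DerSimonian-Laird estimator. This is one common choice of estimator, shown purely as an illustration (the abstract does not name the estimator used), and the per-study effect sizes and variances below are hypothetical.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pooled effect under a random-effects model (DerSimonian-Laird).

    effects: per-study standardized effect sizes (e.g., Hedges' g)
    variances: per-study sampling variances
    """
    effects = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                    # fixed-effect weights
    fixed = (w * effects).sum() / w.sum()
    q = (w * (effects - fixed) ** 2).sum()         # Cochran's Q heterogeneity
    df = len(effects) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    pooled = (w_re * effects).sum() / w_re.sum()
    se = np.sqrt(1.0 / w_re.sum())
    return pooled, se, tau2

# Hypothetical group-difference effect sizes on spoken-syntax assessments.
g = [0.8, 0.5, 1.1, 0.3]
v = [0.05, 0.04, 0.09, 0.06]
pooled, se, tau2 = dersimonian_laird(g, v)
print(round(pooled, 2), round(tau2, 3))
```

Subgroup contrasts (e.g., knowledge vs. awareness constructs) would then pool each subgroup separately and compare the pooled estimates.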
Priestley, Karen; Enns, Charlotte; Arbuckle, Shauna
2018-01-01
Bimodal-bilingual programs are emerging as one way to meet broader needs and provide expanded language, educational and social-emotional opportunities for students who are deaf and hard of hearing (Marschark, M., Tang, G. & Knoors, H. (Eds.). (2014). Bilingualism and bilingual Deaf education. New York, NY: Oxford University Press; Paludneviciene, R. & Harris, R. (2011). Impact of cochlear implants on the deaf community. In Paludneviciene, R. & Leigh, I. (Eds.), Cochlear implants evolving perspectives (pp. 3-19). Washington, DC: Gallaudet University Press). However, there is limited research on students' spoken language development, signed language growth, academic outcomes or the social-emotional factors associated with these programs (Marschark, M., Tang, G. & Knoors, H. (Eds.). (2014). Bilingualism and bilingual Deaf education. New York, NY: Oxford University Press; Nussbaum, D. & Scott, S. (2011). The cochlear implant education center: Perspectives on effective educational practices. In Paludneviciene, R. & Leigh, I. (Eds.), Cochlear implants evolving perspectives (pp. 175-205). Washington, DC: Gallaudet University Press; Spencer, P. & Marschark, M. (Eds.) (2010). Evidence-based practice in educating deaf and hard-of-hearing students. New York, NY: Oxford University Press). The purpose of this case study was to look at formal and informal student outcomes as well as staff and parent perceptions during the first 3 years of implementing a bimodal-bilingual (ASL and spoken English) program within an ASL milieu at a small school for the deaf. Speech and language assessment results for five students were analyzed over a 3-year period and indicated that the students made significant positive gains in all areas, although results were variable.
Staff and parent survey responses indicated primarily positive perceptions of the program. Some staff identified ongoing challenges with balancing signed and spoken language use. Many parents responded with strong emotions, some stating that the program was "life-changing" for their children/families.
Developing Conceptual Understanding of Sarcasm in L2 English through Explicit Instruction
ERIC Educational Resources Information Center
Kim, Jiyun; Lantolf, James P.
2018-01-01
This article reports on a pedagogical project aimed at helping second language (L2) learners of English develop the ability to detect and appropriately interpret spoken sarcasm. The study used a pre- and posttest procedure to assess the development of learners' ability to both detect sarcasm and impute appropriate speaker intentions and attitudes…
Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.
2017-01-01
Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language delays. Aims: We compared deaf and hearing children's performance on a semantic fluency task. Optimal performance on this task requires a systematic search of the mental lexicon, the retrieval of words within a subcategory and, when that subcategory is exhausted, switching to a new subcategory. We compared retrieval patterns between groups, and also compared the responses of deaf children who used British Sign Language (BSL) with those who used spoken English. We investigated how semantic fluency performance related to children's expressive vocabulary and executive function skills, and also retested semantic fluency in the majority of the children nearly 2 years later, in order to investigate how much progress they had made in that time. Methods & Procedures: Participants were deaf children aged 6–11 years (N = 106, comprising 69 users of spoken English, 29 users of BSL and eight users of Sign Supported English—SSE) compared with hearing children (N = 120) of the same age who used spoken English. Semantic fluency was tested for the category ‘animals’. We coded for errors, clusters (e.g., ‘pets’, ‘farm animals’) and switches. Participants also completed the Expressive One‐Word Picture Vocabulary Test and a battery of six non‐verbal executive function tasks. In addition, we collected follow‐up semantic fluency data for 70 deaf and 74 hearing children, nearly 2 years after they were first tested.
Outcomes & Results: Deaf children, whether using spoken or signed language, produced fewer items in the semantic fluency task than hearing children, but they showed similar patterns of responses for items most commonly produced, clustering of items into subcategories and switching between subcategories. Both vocabulary and executive function scores predicted the number of correct items produced. Follow‐up data from deaf participants showed continuing delays relative to hearing children 2 years later. Conclusions & Implications: We conclude that semantic fluency can be used experimentally to investigate lexical organization in deaf children, and that it potentially has clinical utility across the heterogeneous deaf population. We present normative data to aid clinicians who wish to use this task with deaf children. PMID:28691260
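The cluster-and-switch coding used in semantic fluency scoring can be sketched as follows. The subcategory lexicon and the working definition of a cluster (a run of two or more items from the same subcategory) are illustrative assumptions, not the authors' exact coding scheme.

```python
# Hypothetical subcategory lexicon for the 'animals' category.
SUBCATEGORY = {
    "dog": "pets", "cat": "pets", "hamster": "pets",
    "cow": "farm", "pig": "farm", "sheep": "farm",
    "lion": "wild", "zebra": "wild",
}

def score_fluency(responses):
    """Count clusters (runs of >= 2 same-subcategory items) and switches
    (transitions between different subcategories) in a response list."""
    cats = [SUBCATEGORY.get(r, "other") for r in responses]
    clusters, switches = 0, 0
    run = 1
    for prev, cur in zip(cats, cats[1:]):
        if cur == prev:
            run += 1
        else:
            switches += 1
            if run >= 2:
                clusters += 1
            run = 1
    if run >= 2:          # close out the final run
        clusters += 1
    return clusters, switches

clusters, switches = score_fluency(
    ["dog", "cat", "cow", "pig", "sheep", "lion"]
)
print(clusters, switches)  # 2 2: a pets cluster, a farm cluster, two switches
```

Published coding schemes vary (e.g., whether single-item runs count toward switches), so any clinical use should follow the scheme of the normative data being referenced.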
Le langage des gestes (Body Language).
ERIC Educational Resources Information Center
Brunet, Jean-Paul
1985-01-01
Body language is inseparable from spoken language, and may reflect universal behavior or be culture-specific. Photographs and videotape recordings can help the French instructor illustrate the richness of facial and body mannerisms. (MSE)
Vocal Development as a Guide to Modeling the Evolution of Language.
Oller, D Kimbrough; Griebel, Ulrike; Warlaumont, Anne S
2016-04-01
Modeling of the evolution and development of language has principally utilized mature units of spoken language, phonemes and words, as both targets and inputs. This approach cannot address the earliest phases of development because young infants are unable to produce such language features. We argue that units of early vocal development (protophones and their primitive illocutionary/perlocutionary forces) should be targeted in evolutionary modeling, because they suggest likely units of hominin vocalization/communication shortly after the split from the chimpanzee/bonobo lineage, and because early development of spontaneous vocal capability is a logically necessary step toward vocal language, a root capability without which other crucial steps toward vocal language capability are impossible. Modeling of language evolution/development must account for dynamic change in early communicative units of form/function across time. We argue for interactive contributions of sender/infants and receiver/caregivers in a feedback loop involving both development and evolution, and propose to begin computational modeling at the hominin break from the primate communicative background.
Music and Early Language Acquisition
Brandt, Anthony; Gebrian, Molly; Slevc, L. Robert
2012-01-01
Language is typically viewed as fundamental to human intelligence. Music, while recognized as a human universal, is often treated as an ancillary ability – one dependent on or derivative of language. In contrast, we argue that it is more productive from a developmental perspective to describe spoken language as a special type of music. A review of existing studies presents a compelling case that musical hearing and ability is essential to language acquisition. In addition, we challenge the prevailing view that music cognition matures more slowly than language and is more difficult; instead, we argue that music learning matches the speed and effort of language acquisition. We conclude that music merits a central place in our understanding of human development. PMID:22973254
Early Hearing Detection and Intervention in Developing Countries: Current Status and Prospects
ERIC Educational Resources Information Center
Olusanya, Bolajoko O.
2006-01-01
Infant hearing screening is emerging rapidly as a silent global revolution for the early detection of children with congenital or early onset hearing loss to ensure timely enrollment in family-oriented intervention programs for the development of spoken language. This article examines the overriding and interrelated scientific, ethical and…
Same Talker, Different Language: A Replication.
ERIC Educational Resources Information Center
Stockmal, Verna; Bond, Z. S.
This research investigated judgments of language samples produced by bilingual speakers. In the first study, listeners judged whether two language samples produced by bilingual speakers were spoken in the same language or in two different languages. Four bilingual African talkers recorded short passages in Swahili and in their home language (Akan,…
Nayak, Satheesha B; Awal, Mahfuzah Binti; Han, Chang Wei; Sivaram, Ganeshram; Vigneswaran, Thimesha; Choon, Tee Lian
2016-01-01
Introduction: The tongue is mainly used for taste, chewing and speech. In the present study, we focused on the tongue's secondary function in phonetic pronunciation and linguistics, and on how these factors affect tongue movements. Objective: To compare all possible movements of the tongue among Malaysians belonging to three ethnic races and to find out whether there is any link between the languages spoken and the ability to perform various tongue movements. Materials and Methods: A total of 450 undergraduate medical students participated in the study. The students were chosen from three different races, i.e. Malays, Chinese and Indians (Malaysian Indians). Data were collected from the students through a semi-structured interview, following which each student was asked to demonstrate various tongue movements such as protrusion, retraction, flattening, rolling, twisting, folding or any other special movements. The data obtained were first segregated and analysed according to gender, race, and the types and dialects of languages spoken. Results: We found that most Malaysians were able to perform the basic movements of the tongue, such as protrusion and flattening, and very few were able to perform twisting and folding of the tongue. The ability to perform normal tongue movements and special movements such as folding, twisting and rolling was higher among Indians than among Malays and Chinese. Conclusion: Languages spoken by Indians involve detailed tongue rolling and folding in pronouncing certain words, which may be why Indians are more versatile with tongue movements than the other two races among Malaysians. The languages a person speaks may thus be a variable that increases the ability to perform special tongue movements, alongside the influence of the person's genetic makeup. PMID:26894051
ERIC Educational Resources Information Center
Starkman, Neal
2008-01-01
Online resources and educator networks are providing teachers of English language learners with a support system they do not often get within their own school districts. Catherine Collier's Cross Cultural Developmental Education Services, based in Ferndale, WA, has been providing professional development and teaching materials to ELL teachers.…
Advances in natural language processing.
Hirschberg, Julia; Manning, Christopher D
2015-07-17
Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area.
Foreign Language Tutoring in Oral Conversations Using Spoken Dialog Systems
NASA Astrophysics Data System (ADS)
Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae
Although there have been enormous investments in English education around the world, the style of English instruction has changed little. Considering the shortcomings of current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper summarizes a set of POSTECH approaches, including theories, technologies, systems, and field studies, and provides relevant pointers. On top of state-of-the-art spoken dialog system technologies, a variety of adaptations have been applied to overcome problems caused by the numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that helps learners become proficient. Integrating these efforts resulted in intelligent educational robots, Mero and Engkey, and a virtual 3D language learning game, Pomy. To verify the effects of our approaches on students' communicative abilities, we conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.
Using spoken words to guide open-ended category formation.
Chauhan, Aneesh; Seabra Lopes, Luís
2011-11-01
Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.
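The open-ended word-grounding loop described above can be sketched minimally: each taught word maintains a running-mean prototype over perceptual features, and the robot names a new percept by its nearest prototype. This is not the authors' actual architecture; the feature dimensions, object names, and values below are invented for illustration.

```python
import numpy as np

class OpenEndedLearner:
    """Minimal sketch of open-ended category formation: spoken words are
    grounded as prototypes (running means) of co-occurring feature vectors."""

    def __init__(self):
        self.prototypes = {}   # word -> (mean feature vector, sample count)

    def teach(self, word, features):
        """Human names an object; update (or create) that word's prototype."""
        features = np.asarray(features, float)
        if word in self.prototypes:
            mean, n = self.prototypes[word]
            self.prototypes[word] = ((mean * n + features) / (n + 1), n + 1)
        else:
            self.prototypes[word] = (features, 1)

    def name(self, features):
        """Robot names a new percept by its nearest stored prototype."""
        features = np.asarray(features, float)
        return min(self.prototypes,
                   key=lambda w: np.linalg.norm(self.prototypes[w][0] - features))

learner = OpenEndedLearner()
learner.teach("ball", [0.9, 0.1])   # hypothetical features: roundness, elongation
learner.teach("ball", [0.8, 0.2])
learner.teach("pen", [0.1, 0.9])
guess = learner.name([0.85, 0.15])
print(guess)  # ball
```

Corrective feedback in the naming game maps naturally onto another `teach()` call with the correct word, which is what makes the category set open-ended rather than fixed in advance.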
THE PARADOX OF SIGN LANGUAGE MORPHOLOGY
Aronoff, Mark; Meir, Irit; Sandler, Wendy
2011-01-01
Sign languages have two strikingly different kinds of morphological structure: sequential and simultaneous. The simultaneous morphology of two unrelated sign languages, American and Israeli Sign Language, is very similar and is largely inflectional, while what little sequential morphology we have found differs significantly and is derivational. We show that at least two pervasive types of inflectional morphology, verb agreement and classifier constructions, are iconically grounded in spatiotemporal cognition, while the sequential patterns can be traced to normal historical development. We attribute the paucity of sequential morphology in sign languages to their youth. This research both brings sign languages much closer to spoken languages in their morphological structure and shows how the medium of communication contributes to the structure of languages. PMID:22223926
Clare Allen, M; Kendrick, Andrew; Archbold, Sue; Harrigan, Suzanne
2014-05-01
The Leaping on with Language programme provides a combination of strategies and activities to accelerate children's spoken language use from simple sentences to complex language. Using a conversational philosophy, it expands the building blocks of language (vocabulary, grammar, speech) whilst emphasising the importance of developing independent social communication and acknowledging a child's developing self-esteem and self-identity between the ages of 4 and 11. Three pilot projects evaluated the programme with a total of 51 delegates. The outcomes were very positive; changes in behaviour were reported from the third pilot group 1 month later. Feedback regarding the length of training, practical strategies and requests for more film clips was incorporated. Leaping on with Language is now a free-to-access resource available online.
Hunter Adams, Jo; Penrose, Katherine L.; Cochran, Jennifer; Rybin, Denis; Doros, Gheorghe; Henshaw, Michelle; Paasche-Orlow, Michael
2013-01-01
Background: This study investigated the impact of English health literacy, spoken proficiency and acculturation on preventive dental care use among Somali refugees in Massachusetts. Methods: 439 adult Somalis who had arrived in the U.S. ≤ 10 years earlier were interviewed. English functional health literacy, dental word recognition, and spoken proficiency were measured using the STOFHLA, REALD, and BEST Plus. Logistic regression tested associations of language measures with preventive dental care use. Results: Without controlling for acculturation, participants with higher health literacy were 2.0 times more likely to have had preventive care (p=0.02). Subjects with higher word recognition were 1.8 times as likely to have had preventive care (p=0.04). Controlling for acculturation, these associations were no longer significant, and spoken proficiency was not associated with increased preventive care use. Discussion: English health literacy and spoken proficiency were not associated with preventive dental care. Other factors, such as acculturation, were more predictive of care use than language skills. PMID:23748902
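Reported associations like "2.0 times more likely" correspond to odds ratios from the logistic regression; the unadjusted case reduces to a 2x2 table. A sketch with invented counts (not the study's data), including a Woolf confidence interval on the log odds ratio:

```python
import math

# Hypothetical 2x2 table: health literacy (adequate/limited) x preventive care.
care_adequate, no_care_adequate = 120, 80
care_limited, no_care_limited = 60, 80

odds_adequate = care_adequate / no_care_adequate
odds_limited = care_limited / no_care_limited
odds_ratio = odds_adequate / odds_limited   # analogous to the abstract's "2.0 times"

# 95% CI on the log odds ratio (Woolf's method: SE from summed reciprocal counts).
se = math.sqrt(sum(1 / n for n in
                   (care_adequate, no_care_adequate, care_limited, no_care_limited)))
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(round(odds_ratio, 2), round(lo, 2), round(hi, 2))
```

The adjusted analysis in the study would instead fit a multivariable logistic regression with acculturation as a covariate, which is how the unadjusted association can lose significance.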
Incremental comprehension of spoken quantifier sentences: Evidence from brain potentials.
Freunberger, Dominik; Nieuwland, Mante S
2016-09-01
Do people incrementally incorporate the meaning of quantifier expressions to understand an unfolding sentence? Most previous studies concluded that quantifiers do not immediately influence how a sentence is understood, based on the observation that online N400-effects differed from offline plausibility judgments. Those studies, however, used serial visual presentation (SVP), which involves unnatural reading. In the current ERP-experiment, we presented spoken positive and negative quantifier sentences ("Practically all/practically no postmen prefer delivering mail, when the weather is good/bad during the day"). Different from results obtained in a previously reported SVP-study (Nieuwland, 2016), sentence truth-value N400 effects occurred in positive and negative quantifier sentences alike, reflecting fully incremental quantifier comprehension. This suggests that the prosodic information available during spoken language comprehension supports the generation of online predictions for upcoming words and that, at least for quantifier sentences, comprehension of spoken language may proceed more incrementally than comprehension during SVP reading.
Saving a Language with Computers, Tape Recorders, and Radio.
ERIC Educational Resources Information Center
Bennett, Ruth
This paper discusses the use of technology in instruction. It begins by examining research on technology and indigenous languages, focusing on the use of technology to get community attention for an indigenous language, improve the quantity of quality language, document spoken language, create sociocultural learning contexts, improve study skills,…
Signs of Change: Contemporary Attitudes to Australian Sign Language
ERIC Educational Resources Information Center
Slegers, Claudia
2010-01-01
This study explores contemporary attitudes to Australian Sign Language (Auslan). Since at least the 1960s, sign languages have been accepted by linguists as natural languages with all of the key ingredients common to spoken languages. However, these visual-spatial languages have historically been subject to ignorance and myth in Australia and…
Micro Language Planning and Cultural Renaissance in Botswana
ERIC Educational Resources Information Center
Alimi, Modupe M.
2016-01-01
Many African countries exhibit complex patterns of language use because of linguistic pluralism. The situation is often compounded by the presence of at least one foreign language that is either the official or second language. The language situation in Botswana depicts this complex pattern. Out of the 26 languages spoken in the country, including…
Semiotic diversity in utterance production and the concept of ‘language’
Kendon, Adam
2014-01-01
Sign language descriptions that use an analytic model borrowed from spoken language structural linguistics have proved to be not fully appropriate. Pictorial and action-like modes of expression are integral to how signed utterances are constructed and to how they work. However, observation shows that speakers likewise use kinesic and vocal expressions that are not accommodated by spoken language structural linguistic models, including pictorial and action-like modes of expression. These, also, are integral to how speaker utterances in face-to-face interaction are constructed and to how they work. Accordingly, the object of linguistic inquiry should be revised, so that it comprises not only an account of the formal abstract systems that utterances make use of, but also an account of how the semiotically diverse resources that all languaging individuals use are organized in relation to one another. Both language as an abstract system and languaging should be the concern of linguistics. PMID:25092661
Dynamic action units slip in speech production errors
Goldstein, Louis; Pouplier, Marianne; Chen, Larissa; Saltzman, Elliot; Byrd, Dani
2008-01-01
In the past, the nature of the compositional units proposed for spoken language has largely diverged from the types of control units pursued in the domains of other skilled motor tasks. A classic source of evidence as to the units structuring speech has been patterns observed in speech errors – “slips of the tongue”. The present study reports, for the first time, on kinematic data from tongue and lip movements during speech errors elicited in the laboratory using a repetition task. Our data are consistent with the hypothesis that speech production results from the assembly of dynamically defined action units – gestures – in a linguistically structured environment. The experimental results support both the presence of gestural units and the dynamical properties of these units and their coordination. This study of speech articulation shows that it is possible to develop a principled account of spoken language within a more general theory of action. PMID:16822494
Willis, Suzi; Goldbart, Juliet; Stansfield, Jois
2014-07-01
To compare the verbal short-term memory and visual working memory abilities of six children with congenital hearing impairment identified as having significant language-learning difficulties with normative data from typically hearing children, using standardized memory assessments. Six children with hearing loss aged 8-15 years were assessed annually over a two-year period on measures of verbal short-term memory (non-word and word recall) and visual working memory. All children had cognitive abilities within normal limits and used spoken language as their primary mode of communication. The language assessment scores at the beginning of the study revealed that all six participants exhibited delays of two years or more on standardized assessments of receptive and expressive vocabulary and spoken language. The children with hearing impairment scored significantly higher on the non-word recall task than on the "real" word recall task. They also scored significantly higher on visual working memory than the age-matched sample from the standardized memory assessment. Each of the six participants in this study displayed the same pattern of strengths and weaknesses in verbal short-term memory and visual working memory despite their very different chronological ages. The children's poor ability to recall single-syllable words relative to non-words is a clinical indicator of their difficulties in verbal short-term memory. However, the children with hearing impairment do not display generalized processing difficulties and indeed demonstrate strengths in visual working memory. Poor word recall, in combination with difficulties in early word learning, may be an indicator of children with hearing impairment who will struggle to develop spoken language equal to that of their normally hearing peers. This early identification has the potential to allow for target-specific intervention that may remediate their difficulties. Copyright © 2014. Published by Elsevier Ireland Ltd.
Huysmans, Elke; Bolk, Elske; Zekveld, Adriana A; Festen, Joost M; de Groot, Annette M B; Goverts, S Theo
2016-01-01
The authors first examined the influence of moderate to severe congenital hearing impairment (CHI) on the correctness of samples of elicited spoken language. Then, the authors used this measure as an indicator of linguistic proficiency and examined its effect on performance in language reception, independent of bottom-up auditory processing. In groups of adults with normal hearing (NH, n = 22), acquired hearing impairment (AHI, n = 22), and moderate to severe CHI (n = 21), the authors assessed linguistic proficiency by analyzing the morphosyntactic correctness of their spoken language production. Language reception skills were examined with a task for masked sentence recognition in the visual domain (text), at a readability level of 50%, using grammatically correct sentences and sentences with distorted morphosyntactic cues. The actual performance on the tasks was compared between groups. Adults with CHI made more morphosyntactic errors in spoken language production than adults with NH, while no differences were observed between the AHI and NH group. This outcome pattern persisted when comparisons were restricted to subgroups of AHI and CHI adults, matched for current auditory speech reception abilities. The data yielded no differences between groups in performance in masked text recognition of grammatically correct sentences in a test condition in which subjects could fully take advantage of their linguistic knowledge. Also, no difference between groups was found in the sensitivity to morphosyntactic distortions when processing short masked sentences, presented visually. These data showed that problems with the correct use of specific morphosyntactic knowledge in spoken language production are a long-term effect of moderate to severe CHI, independent of current auditory processing abilities.
However, moderate to severe CHI generally does not impede performance in masked language reception in the visual modality, as measured in this study with short, degraded sentences. Aspects of linguistic proficiency that are affected by CHI thus do not seem to play a role in masked sentence recognition in the visual modality.
ERIC Educational Resources Information Center
Gangji, Nazneen; Pascoe, Michelle; Smouse, Mantoa
2015-01-01
Background: Swahili is widely spoken in East Africa, but to date there are no culturally and linguistically appropriate materials available for speech-language therapists working in the region. The challenges are further exacerbated by the limited research available on the typical acquisition of Swahili phonology. Aim: To describe the speech…
Towards Identifying Dyslexia in Standard Indonesian: The Development of a Reading Assessment Battery
ERIC Educational Resources Information Center
Jap, Bernard A. J.; Borleffs, Elisabeth; Maassen, Ben A. M.
2017-01-01
With its transparent orthography, Standard Indonesian is spoken by over 160 million inhabitants and is the primary language of instruction in education and the government in Indonesia. An assessment battery of reading and reading-related skills was developed as a starting point for the diagnosis of dyslexia in beginner learners. Founded on the…
The Development of Conjunction Use in Advanced L2 Speech
ERIC Educational Resources Information Center
Jaroszek, Marcin
2011-01-01
The article discusses the results of a longitudinal study of how the use of conjunctions, as an aspect of spoken discourse competence of 13 selected advanced students of English, developed throughout their 3-year English as a foreign language (EFL) tertiary education. The analysis was carried out in relation to a number of variables, including 2…
Development of a Test of Spoken Dutch for Prospective Immigrants
ERIC Educational Resources Information Center
De Jong, John H. A. L.; Lennig, Matthew; Kerkhoff, Anne; Poelmans, Petra
2009-01-01
Based on a parliamentary vote with broad support, the Ministry of Justice of the Netherlands in December 2003 commissioned the development of an examination system to test the Dutch oral language skills of foreigners who want to immigrate permanently to the Netherlands for economic or family reasons. This assessment would take place in the country…
Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hagiwara, Hiroko
2011-04-01
Healthy adults can identify spoken words at a remarkable speed, by incrementally analyzing word-onset information. It is currently unknown how this adult-level speed of spoken-word processing emerges during children's native-language acquisition. In a picture-word mismatch paradigm, we manipulated the semantic congruency between picture contexts and spoken words, and recorded event-related potential (ERP) responses to the words. Previous similar studies focused on the N400 response, but we focused instead on the onsets of semantic congruency effects (N200 or Phonological Mismatch Negativity), which contain critical information for incremental spoken-word processing. We analyzed ERPs obtained longitudinally from two age cohorts of 40 primary-school children (total n = 80) in a 3-year period. Children first tested at 7 years of age showed earlier onsets of congruency effects (by approximately 70 ms) when tested 2 years later (i.e., at age 9). Children first tested at 9 years of age did not show such shortening of onset latencies 2 years later (i.e., at age 11). Overall, children's onset latencies at age 9 appeared similar to those of adults. These data challenge the previous hypothesis that word processing is well established at age 7. Instead they support the view that the acceleration of spoken-word processing continues beyond age 7. Copyright © 2011 Elsevier Ltd. All rights reserved.
Kanazawa, Yuji; Nakamura, Kimihiro; Ishii, Toru; Aso, Toshihiko; Yamazaki, Hiroshi; Omori, Koichi
2017-01-01
Sign language is an essential medium for everyday social interaction for deaf people and plays a critical role in verbal learning. In particular, language development in these individuals is likely to rely heavily on verbal short-term memory (STM) via sign language. Most previous studies compared neural activations during signed language processing in deaf signers with those during spoken language processing in hearing speakers. For sign language users, it thus remains unclear how visuospatial inputs are converted into the verbal STM operating in the left-hemisphere language network. Using functional magnetic resonance imaging, the present study investigated neural activation while bilinguals of spoken and signed language were engaged in a sequence memory span task. On each trial, participants viewed a nonsense syllable sequence presented either as written letters or as fingerspelling (4-7 syllables in length) and then held the syllable sequence for 12 s. Behavioral analysis revealed that participants relied on phonological memory while holding verbal information regardless of the type of input modality. At the neural level, this maintenance stage broadly activated the left-hemisphere language network, including the inferior frontal gyrus, supplementary motor area, superior temporal gyrus and inferior parietal lobule, for both letter and fingerspelling conditions. Interestingly, while most participants reported that they relied on phonological memory during maintenance, direct comparisons between letter and fingerspelling inputs revealed strikingly different patterns of neural activation during the same period. Namely, the effortful maintenance of fingerspelling inputs relative to letter inputs activated the left superior parietal lobule and dorsal premotor area, i.e., brain regions known to play a role in visuomotor analysis of hand/arm movements.
These findings suggest that the dorsal visuomotor neural system subserves verbal learning via sign language by relaying gestural inputs to the classical left-hemisphere language network.
The Hidden Meaning of Inner Speech.
ERIC Educational Resources Information Center
Pomper, Marlene M.
This paper is concerned with the inner speech process, its relationship to thought and behavior, and its theoretical and educational implications. The paper first defines inner speech as a bridge between thought and written or spoken language and traces its development. Second, it investigates competing theories surrounding the subject with an…
Orthography Influences the Perception and Production of Speech
ERIC Educational Resources Information Center
Rastle, Kathleen; McCormick, Samantha F.; Bayliss, Linda; Davis, Colin J.
2011-01-01
One intriguing question in language research concerns the extent to which orthographic information impacts on spoken word processing. Previous research has faced a number of methodological difficulties and has not reached a definitive conclusion. Our research addresses these difficulties by capitalizing on recent developments in the area of word…
A Study of the Linguistic Features of Cajun English.
ERIC Educational Resources Information Center
Cox, Juanita
The study contrasts Acadian English (Cajun) spoken in Louisiana with the local standard English, describing the linguistic features (pronunciation, grammar, vocabulary) of the dialect in non-technical language. The objective is to inform elementary and secondary school teachers and others involved in education and curriculum development for a…
ERIC Educational Resources Information Center
Swift, Lloyd B.; and others
A text is presented for Kituba, a trade language spoken along the lower Congo River and its tributaries. The course consists of a primer and a five subject-oriented groups of lessons. The primer introduces major grammatical structures, develops adequate pronunciation, and presents useful vocabulary for a variety of situations. The lesson groups…
ERIC Educational Resources Information Center
Glaus, Marlene
The activities presented in this book, designed to help children translate their thoughts into spoken and written words, can supplement an elementary teacher's own language arts lessons. Objectives for each activity are listed, with the general focus of the many oral activities being to develop a rich verbal background for future written work. The…
Why Dose Frequency Affects Spoken Vocabulary in Preschoolers with Down Syndrome
ERIC Educational Resources Information Center
Yoder, Paul J.; Woynaroski, Tiffany; Fey, Marc E.; Warren, Steven F.; Gardner, Elizabeth
2015-01-01
In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only…
"Jaja" in Spoken German: Managing Knowledge Expectations
ERIC Educational Resources Information Center
Taleghani-Nikazm, Carmen; Golato, Andrea
2016-01-01
In line with the other contributions to this issue on teaching pragmatics, this paper provides teachers of German with a two-day lesson plan for integrating authentic spoken language and its associated cultural background into their teaching. Specifically, the paper discusses how "jaja" and its phonetic variants are systematically used…
Individual Differences in Inhibitory Control Relate to Bilingual Spoken Word Processing
ERIC Educational Resources Information Center
Mercier, Julie; Pivneva, Irina; Titone, Debra
2014-01-01
We investigated whether individual differences in inhibitory control relate to bilingual spoken word recognition. While their eye movements were monitored, native English and native French English-French bilinguals listened to English words (e.g., "field") and looked at pictures corresponding to the target, a within-language competitor…
ERIC Educational Resources Information Center
Hansen, Lynne
2011-01-01
Recent years have brought increasing attention to studies of language acquisition in a country where the language is spoken, as opposed to formal language study in classrooms. Research on language learners in immersion contexts is important, as the question of whether study abroad is valuable is still somewhat controversial among researchers…
ERIC Educational Resources Information Center
Phillippe, Denise E.
2012-01-01
At Concordia Language Villages, language and culture are inextricably intertwined, as they are in life. Participants "live" and "do" language and culture 16 hours per day. The experiential, residential setting immerses the participants in the culture of the country or countries where the target language is spoken through food,…
Sentence Repetition in Deaf Children with Specific Language Impairment in British Sign Language
ERIC Educational Resources Information Center
Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary
2015-01-01
Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…
Alsatian versus Standard German: Regional Language Bilingual Primary Education in Alsace
ERIC Educational Resources Information Center
Harrison, Michelle Anne
2016-01-01
This article examines the current situation of regional language bilingual primary education in Alsace and contends that the regional language presents a special case in the context of France. The language comprises two varieties: Alsatian, which traditionally has been widely spoken, and Standard German, used as the language of reference and…
Language and Literacy: The Case of India.
ERIC Educational Resources Information Center
Sridhar, Kamal K.
Language and literacy issues in India are reviewed in terms of background, steps taken to combat illiteracy, and some problems associated with literacy. The following facts are noted: India has 106 languages spoken by more than 685 million people, there are several minor script systems, a major language has different dialects, a language may use…
English Language Learners. What Works Clearinghouse Topic Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2007
2007-01-01
English language learners are students with a primary language other than English who have a limited range of speaking, reading, writing, and listening skills in English. English language learners also include students identified and determined by their school as having limited English proficiency and a language other than English spoken in the…
Multiple Languages and the School Curriculum: Experiences from Tanzania
ERIC Educational Resources Information Center
Mushi, Selina Lesiaki Prosper
2012-01-01
This is a research report on children's use of multiple languages and the school curriculum. The study explored factors that trigger use of, and fluency in, multiple languages; and how fluency in multiple languages relates to thought processes and school performance. Advantages and disadvantages of using only one of the languages spoken were…
Beyond Languages, beyond Modalities: Transforming the Study of Semiotic Repertoires
ERIC Educational Resources Information Center
Kusters, Annelies; Spotti, Massimiliano; Swanwick, Ruth; Tapio, Elina
2017-01-01
This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the concept of language repertoires. We discuss key…
The interface between spoken and written language: developmental disorders.
Hulme, Charles; Snowling, Margaret J
2014-01-01
We review current knowledge about reading development and the origins of difficulties in learning to read. We distinguish between the processes involved in learning to decode print, and the processes involved in reading for meaning (reading comprehension). At a cognitive level, difficulties in learning to read appear to be predominantly caused by deficits in underlying oral language skills. The development of decoding skills appears to depend critically upon phonological language skills, and variations in phoneme awareness, letter-sound knowledge and rapid automatized naming each appear to be causally related to problems in learning to read. Reading comprehension difficulties in contrast appear to be critically dependent on a range of oral language comprehension skills (including vocabulary knowledge and grammatical, morphological and pragmatic skills).
Spanish as a Second Language when L1 Is Quechua: Endangered Languages and the SLA Researcher
ERIC Educational Resources Information Center
Kalt, Susan E.
2012-01-01
Spanish is one of the most widely spoken languages in the world. Quechua is the largest indigenous language family to constitute the first language (L1) of second language (L2) Spanish speakers. Despite sheer number of speakers and typologically interesting contrasts, Quechua-Spanish second language acquisition is a nearly untapped research area,…
Bunta, Ferenc; Douglas, Michael; Dickson, Hanna; Cantu, Amy; Wickesberg, Jennifer; Gifford, René H
2016-07-01
There is a critical need to better understand speech and language development in bilingual children learning two spoken languages who use cochlear implants (CIs) and hearing aids (HAs). The paucity of knowledge in this area poses a significant barrier to providing maximal communicative outcomes to a growing number of children who have a hearing loss (HL) and are learning multiple spoken languages. In fact, the number of bilingual individuals receiving CIs and HAs is rapidly increasing, and Hispanic children display a higher prevalence of HL than the general population of the United States. In order to better serve bilingual children with CIs and HAs, appropriate and effective therapy approaches need to be designed and tested, based on research findings. This study investigated the effects of supporting both the home language (Spanish) and the language of the majority culture (English) on language outcomes in bilingual children with HL who use CIs and HAs as compared to their bilingual peers who receive English-only support. Retrospective analyses of language measures were completed for two groups of Spanish- and English-speaking bilingual children with HL who use CIs and HAs matched on a range of demographic and socio-economic variables: those with dual-language support versus their peers with English-only support. Dependent variables included scores from the English version of the Preschool Language Scales, 4th Edition. Bilingual children who received dual-language support outperformed their peers who received English-only support at statistically significant levels as measured by Total Language and Expressive Communication as raw and language age scores. No statistically significant group differences were found on Auditory Comprehension scores.
In addition to providing support in English, encouraging home language use and providing treatment support in the first language may help rather than hinder development of both English and the home language in bilingual children with HL who use CIs and HAs. In fact, dual-language support may yield better overall and expressive English language outcomes than English-only support for this population. © 2016 Royal College of Speech and Language Therapists.
Iconic Factors and Language Word Order
ERIC Educational Resources Information Center
Moeser, Shannon Dawn
1975-01-01
College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)
The Bilingual Language Interaction Network for Comprehension of Speech
ERIC Educational Resources Information Center
Shook, Anthony; Marian, Viorica
2013-01-01
During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can…
Australian Aboriginal Deaf People and Aboriginal Sign Language
ERIC Educational Resources Information Center
Power, Des
2013-01-01
Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or contraindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…
Krio Language Manual. Revised Edition.
ERIC Educational Resources Information Center
Peace Corps, Freetown (Sierra Leone).
Instructional materials for Krio, the creole spoken in Sierra Leone, are designed for Peace Corps volunteer language instruction and intended for the use of both students and instructors. Fifty-six units provide practice in language skills, particularly oral, geared to the daily language needs of volunteers. Lessons are designed for audio-lingual…
Teaching Reading through Language. TECHNIQUES.
ERIC Educational Resources Information Center
Jones, Edward V.
1986-01-01
Because reading is first and foremost a language comprehension process focusing on the visual form of spoken language, such teaching strategies as language experience and assisted reading have much to offer beginning readers. These techniques have been slow to become accepted by many adult literacy instructors; however, the two strategies,…
El Espanol como Idioma Universal (Spanish as a Universal Language)
ERIC Educational Resources Information Center
Mijares, Jose
1977-01-01
A proposal to transform Spanish into a universal language because it possesses the prerequisites: it is a living language, spoken in several countries; it is a natural language; and it uses the ordinary alphabet. Details on simplification and standardization are given. (Text is in Spanish.) (AMH)
The Role of Pronunciation in SENCOTEN Language Revitalization
ERIC Educational Resources Information Center
Bird, Sonya; Kell, Sarah
2017-01-01
Most Indigenous language revitalization programs in Canada currently emphasize spoken language. However, virtually no research has been done on the role of pronunciation in the context of language revitalization. This study set out to gain an understanding of attitudes around pronunciation in the SENCOTEN-speaking community, in order to determine…
Cultural Pluralism in Japan: A Sociolinguistic Outline.
ERIC Educational Resources Information Center
Honna, Nobuyuki
1980-01-01
Addressing the common misconception that Japan is a mono-ethnic, mono-cultural, and monolingual society, this article focuses on several areas of sociolinguistic concern. It discusses: (1) the bimodalism of the Japanese deaf population between Japanese Sign Language as native language and Japanese Spoken Language as acquired second language; (2)…
Audience Effects in American Sign Language Interpretation
ERIC Educational Resources Information Center
Weisenberg, Julia
2009-01-01
There is a system of English mouthing during interpretation that appears to be the result of language contact between spoken language and signed language. English mouthing is a voiceless visual representation of words on a signer's lips produced concurrently with manual signs. It is a type of borrowing prevalent among English-dominant…
Language-in-Education Policies in the Catalan Language Area
ERIC Educational Resources Information Center
Vila i Moreno, F. Xavier
2008-01-01
The territories where Catalan is traditionally spoken as a native language constitute an attractive sociolinguistic laboratory which appears especially interesting from the point of view of language-in-education policies. The educational system has spearheaded the recovery of Catalan during the last 20 years. Schools are being attributed most of…
The Abakua Secret Society in Cuba: Language and Culture.
ERIC Educational Resources Information Center
Cedeno, Rafael A. Nunez
1988-01-01
Reports on attempts to determine whether Cuban Abakua is a pidginized Afro-Spanish, creole, or dead language and concludes that some of this language, spoken by a secret society, has its roots in Efik, a language of the Benue-Congo, and seems to be a simple, ritualistic, structureless argot. (CB)
Language Planning for Venezuela: The Role of English.
ERIC Educational Resources Information Center
Kelsey, Irving; Serrano, Jose
A rationale for teaching foreign languages in Venezuelan schools is discussed. An included sociolinguistic profile of Venezuela indicates that Spanish is the sole language of internal communication needs. Other languages spoken in Venezuela serve primarily a group function among the immigrant and indigenous communities. However, the teaching of…
Bunta, Ferenc; Douglas, Michael; Dickson, Hanna; Cantu, Amy; Wickesberg, Jennifer; Gifford, René H.
2015-01-01
Background There is a critical need to better understand speech and language development in bilingual children learning two spoken languages who use cochlear implants (CIs) and hearing aids (HAs). The paucity of knowledge in this area poses a significant barrier to providing maximal communicative outcomes to a growing number of children who have a hearing loss and are learning multiple spoken languages. In fact, the number of bilingual individuals receiving CIs and HAs is rapidly increasing, and Hispanic children display a higher prevalence of hearing loss than the general population of the United States (e.g., Mehra, Eavey, & Keamy, 2009). In order to better serve bilingual children with CIs and HAs, appropriate and effective therapy approaches need to be designed and tested, based on research findings. Aims This study investigated the effects of supporting both the home language (Spanish) and the language of the majority culture (English) on language outcomes in bilingual children with hearing loss (HL) who use CIs and HAs as compared to their bilingual peers who receive English only support. Methods and Procedures Retrospective analyses of language measures were completed for two groups of Spanish-and English-speaking bilingual children with HL who use CIs and HAs matched on a range of demographic and socio-economic variables: those with dual language support versus their peers with English only support. Dependent variables included scores from the English version of the Preschool Language Scales, 4th edition. Results Bilingual children who received dual language support outperformed their peers who received English only support at statistically significant levels as measured by Total Language and Expressive Communication as raw and language age scores. No statistically significant group differences were found on Auditory Comprehension scores. 
Conclusions: In addition to providing support in English, encouraging home language use and providing treatment support in the first language may help rather than hinder development of both English and the home language in bilingual children with hearing loss who use CIs and HAs. In fact, dual language support may yield better overall and expressive English language outcomes than English-only support for this population. PMID:27017913
ERIC Educational Resources Information Center
Oktapoti, Maria; Okalidou, Areti; Kyriafinis, George; Petinou, Kakia; Vital, Victor; Herman, Rosalind
2016-01-01
Objective: There are very few measures of language development in spoken Greek that can be used with young deaf children. This study investigated the use of Cyprus Lexical List (CYLEX), a receptive and expressive vocabulary assessment based on parent report that has recently been adapted to Standard Greek, to measure the vocabulary development of…
Frey, Jennifer R; Kaiser, Ann P; Scherer, Nancy J
2018-02-01
The purpose of this study was to investigate the influences of child speech intelligibility and rate on caregivers' linguistic responses. This study compared the language use of children with cleft palate with or without cleft lip (CP±L) and their caregivers' responses. Descriptive analyses of children's language and caregivers' responses and a multilevel analysis of caregiver responsivity were conducted to determine whether there were differences in children's productive language and caregivers' responses to different types of child utterances. Play-based caregiver-child interactions were video recorded in a clinic setting. Thirty-eight children (19 toddlers with nonsyndromic repaired CP±L and 19 toddlers with typical language development) between 17 and 37 months old and their primary caregivers participated. Child and caregiver measures were obtained from transcribed and coded video recordings and included the rate, total number of words, and number of different words spoken by children and their caregivers, the intelligibility of child utterances, and the form of caregiver responses. Findings from this study suggest caregivers are highly responsive to toddlers' communication attempts, regardless of the intelligibility of those utterances. However, opportunities to respond were fewer for children with CP±L. Significant differences were observed in children's intelligibility and productive language and in caregivers' use of questions in response to unintelligible utterances of children with and without CP±L. This study provides information about differences in the language use of children with CP±L and in caregivers' responses to the spoken language of toddlers with and without CP±L.
Bimodal Bilinguals Co-activate Both Languages during Spoken Comprehension
Shook, Anthony; Marian, Viorica
2012-01-01
Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are also activated in parallel. Hearing ASL-English bimodal bilinguals' and English monolinguals' eye movements were recorded during a visual world paradigm, in which participants were instructed, in English, to select objects from a display. In critical trials, the target item appeared with a competing item that overlapped with the target in ASL phonology. Bimodal bilinguals looked more at competing items than at phonologically unrelated items, and looked more at competing items relative to monolinguals, indicating activation of the sign language during spoken English comprehension. The findings suggest that language co-activation is not modality specific, and provide insight into the mechanisms that may underlie cross-modal language co-activation in bimodal bilinguals, including the role that top-down and lateral connections between levels of processing may play in language comprehension. PMID:22770677
ERIC Educational Resources Information Center
Castro, Paloma; Sercu, Lies; Mendez Garcia, Maria del Carmen
2004-01-01
A recent shift has been noticeable in foreign language education theory. Previously, foreign languages were taught as a linguistic code. This then shifted to teaching that code against the sociocultural background of, primarily, one country in which the foreign language is spoken as a national language. More recently, teaching has reflected on…
Language Shift or Increased Bilingualism in South Africa: Evidence from Census Data
ERIC Educational Resources Information Center
Posel, Dorrit; Zeller, Jochen
2016-01-01
In the post-apartheid era, South Africa has adopted a language policy that gives official status to 11 languages (English, Afrikaans, and nine Bantu languages). However, English has remained the dominant language of business, public office, and education, and some research suggests that English is increasingly being spoken in domestic settings.…
A Grammar of Sierra Popoluca (Soteapanec, a Mixe-Zoquean Language)
ERIC Educational Resources Information Center
de Jong Boudreault, Lynda J.
2009-01-01
This dissertation is a comprehensive description of the grammar of Sierra Popoluca (SP, aka Soteapanec), a Mixe-Zoquean language spoken by approximately 28,000 people in Veracruz, Mexico. This grammar begins with an introduction to the language, its language family, a typological overview of the language, a brief history of my fieldwork, and the…
Uncertainty in the Community Language Classroom: A Response to Michael Clyne.
ERIC Educational Resources Information Center
Stuart-Smith, Jane
1997-01-01
Response to an article on community languages in Australia supports the argument that community language speakers do not have an advantage over non-speakers in the community language classroom, but can be disadvantaged by differences between the language taught in the classroom and that spoken in homes. Examples are drawn from Punjabi instruction…
A Corpus-Based Study on Turkish Spoken Productions of Bilingual Adults
ERIC Educational Resources Information Center
Agçam, Reyhan; Bulut, Adem
2016-01-01
The current study investigated whether monolingual adult speakers of Turkish and bilingual adult speakers of Arabic and Turkish significantly differ regarding their spoken productions in Turkish. Accordingly, two groups of undergraduate students studying Turkish Language and Literature at a state university in Turkey were presented two videos on a…
Effects of Prosody and Position on the Timing of Deictic Gestures
ERIC Educational Resources Information Center
Rusiewicz, Heather Leavy; Shaiman, Susan; Iverson, Jana M.; Szuminsky, Neil
2013-01-01
Purpose: In this study, the authors investigated the hypothesis that the perceived tight temporal synchrony of speech and gesture is evidence of an integrated spoken language and manual gesture communication system. It was hypothesized that experimental manipulations of the spoken response would affect the timing of deictic gestures. Method: The…
Monitoring the Performance of Human and Automated Scores for Spoken Responses
ERIC Educational Resources Information Center
Wang, Zhen; Zechner, Klaus; Sun, Yu
2018-01-01
As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…
The Structure of Ayacucho Quechua.
ERIC Educational Resources Information Center
Parker, Gary J.; Sola, Donald F.
This linguistic description of Ayacucho Quechua is intended to be a fairly complete analysis of the spoken language. Used with the authors' Spoken Ayacucho Quechua course, it is a comprehensive reference work for the student as well as a contribution to the field of descriptive linguistics. Because of the high degree of inflection and syntactic…
Teaching English as a "Second Language" in Kenya and the United States: Convergences and Divergences
ERIC Educational Resources Information Center
Roy-Campbell, Zaline M.
2015-01-01
English is spoken in five countries as the native language and in numerous other countries as an official language and the language of instruction. In countries where English is the native language, it is taught to speakers of other languages as an additional language to enable them to participate in all domains of life of that country. In many…
"Weil der schmeckt so gut!" ["Because it tastes so good!"] = The Learner as Linguist
ERIC Educational Resources Information Center
Vandergriff, Ilona
2005-01-01
In spoken German, "weil" frequently occurs with verb-second (V2) word order instead of the prescribed verb-last (Vlast) order. This article provides an overview of the variation and its historical development and offers suggestions and materials for a linguistic unit in the advanced language classroom. Current research demonstrates that the variation is not…
ERIC Educational Resources Information Center
Nihei, Koichi
This paper discusses how to teach listening so that English-as-a-Second-Language students can develop a level of listening ability that is useful in the real world, not just in the classroom. It asserts that if teachers know the processes involved in listening comprehension and some features of spoken English, they can provide students with…
Competencies for Teachers Who Instruct Children with Learning Disabilities. Project I.O.U.
ERIC Educational Resources Information Center
Keegan, William
The report lists competencies for teachers in every day interactions with learning disabled students. Developed by a task force, the competencies are intended to serve as general guidelines. Information is presented on the goal, assessment competencies, and instructional competencies for the following areas: classroom management, spoken language,…
Changing Realities in the Classroom for Hearing-Impaired Children with Cochlear Implant
ERIC Educational Resources Information Center
Vermeulen, Anneke; De Raeve, Leo; Langereis, Margreet; Snik, Ad
2012-01-01
Auditory perception with cochlear implants (CIs) enables the majority of deaf children with normal learning potential to develop (near) age-appropriate spoken language. As a consequence, a large proportion of children now attend mainstream education from an early stage. The acoustical environment in kindergartens and schools, however, might be…
Highlights in the History of Oral Teacher Preparation in America
ERIC Educational Resources Information Center
Marvelli, Alan L.
2010-01-01
The history of oral teacher preparation in America is both significant and diverse. There are numerous individuals and events that shifted and defined the professional practices of individuals who promote the listening and spoken language development of children with hearing loss. This article provides an overview of this rich history and offers a…
Advances Underlying Spoken Language Development: A Century of Building on Bell.
ERIC Educational Resources Information Center
Ling, Daniel
1990-01-01
This article compares Alexander Graham Bell's achievements in the areas of instruction and technology for the hearing impaired with contemporary techniques and devices, many of which stem from his work. Assessment techniques, cochlear implants, tactile devices and visual aids are discussed as well as other alternatives and supplements to residual…
Research and Policy Considerations for English Learner Equity
ERIC Educational Resources Information Center
Robinson-Cimpian, Joseph P.; Thompson, Karen D.; Umansky, Ilana M.
2016-01-01
English learners (ELs), students from a home where a language other than English is spoken and who are in the process of developing English proficiency themselves, represent over 10% of the US student population. Oftentimes education policies and practices create barriers for ELs to achieve access and outcomes that are equitable to those of their…
Guest Comment: Universal Language Requirement.
ERIC Educational Resources Information Center
Sherwood, Bruce Arne
1979-01-01
Explains that the ability to read English is almost universal among scientists; however, there are enormous problems with spoken English. Advocates the use of Esperanto as a viable alternative and as a language requirement for graduate work. (GA)
Neural Network Computing and Natural Language Processing.
ERIC Educational Resources Information Center
Borchardt, Frank
1988-01-01
Considers the application of neural network concepts to traditional natural language processing and demonstrates that neural network computing architecture can: (1) learn from actual spoken language; (2) observe rules of pronunciation; and (3) reproduce sounds from the patterns derived by its own processes. (Author/CB)
An Introduction to Spoken Setswana.
ERIC Educational Resources Information Center
Mistry, Karen S.
A guide to instruction in Setswana, the most widely dispersed Bantu language in Southern Africa, includes general material about the language, materials for the teacher, 163 lessons, vocabulary lists, and supplementary materials and exercises. Introductory material about the language discusses its distribution and characteristics, and orthography.…
Where Should We Look for Language?
ERIC Educational Resources Information Center
Stokoe, William C.
1986-01-01
Argues that the beginnings of language need to be sought not in the universal abstract grammar proposed by Chomsky but in the evolution of the everyday interaction of the human species. Studies indicate that there is no great gulf between spoken language and nonverbal communication. (SED)
ERIC Educational Resources Information Center
Hyslop, Gwendolyn
2011-01-01
Kurtop is a Tibeto-Burman language spoken by approximately 15,000 people in Northeastern Bhutan. This dissertation is the first descriptive grammar of the language, based on extensive fieldwork and community-driven language documentation in Bhutan. When possible, analyses are presented in typological and historical/comparative perspectives and…
Berber Dialects. Materials Status Report.
ERIC Educational Resources Information Center
Center for Applied Linguistics, Washington, DC. Language/Area Reference Center.
The materials status report for the Berber languages, minority languages spoken in northern Africa, is one of a series intended to provide the nonspecialist with a picture of the availability and quality of texts for teaching various languages to English speakers. The report consists of: (1) a brief narrative description of the Berber language,…
The Unified Phonetic Transcription for Teaching and Learning Chinese Languages
ERIC Educational Resources Information Center
Shieh, Jiann-Cherng
2011-01-01
In order to preserve their distinctive cultures, people are eager to develop writing systems for their languages as recording tools. Mandarin, Taiwanese, and Hakka are the three major and most widely spoken dialects of the Han languages in Chinese society. Their writing systems all use Han characters. Various and independent phonetic…
Dilemmatic Aspects of Language Policies in a Trilingual Preschool Group
ERIC Educational Resources Information Center
Puskás, Tünde; Björk-Willén, Polly
2017-01-01
This article explores dilemmatic aspects of language policies in a preschool group in which three languages (Swedish, Romani and Arabic) are spoken on an everyday basis. The article highlights the interplay between policy decisions on the societal level, the teachers' interpretations of these policies, as well as language practices on the micro…
Making a Difference: Language Teaching for Intercultural and International Dialogue
ERIC Educational Resources Information Center
Byram, Michael; Wagner, Manuela
2018-01-01
Language teaching has long been associated with teaching in a country or countries where a target language is spoken, but this approach is inadequate. In the contemporary world, language teaching has a responsibility to prepare learners for interaction with people of other cultural backgrounds, teaching them skills and attitudes as well as…
Grammatical Processing of Spoken Language in Child and Adult Language Learners
ERIC Educational Resources Information Center
Felser, Claudia; Clahsen, Harald
2009-01-01
This article presents a selective overview of studies that have investigated auditory language processing in children and late second-language (L2) learners using online methods such as event-related potentials (ERPs), eye-movement monitoring, or the cross-modal priming paradigm. Two grammatical phenomena are examined in detail, children's and…
Kornai, András
2013-01-01
Of the approximately 7,000 languages spoken today, some 2,500 are generally considered endangered. Here we argue that this consensus figure vastly underestimates the danger of digital language death, in that less than 5% of all languages can still ascend to the digital realm. We present evidence of a massive die-off caused by the digital divide. PMID:24167559
Politeness Strategies among Native and Romanian Speakers of English
ERIC Educational Resources Information Center
Ambrose, Dominic
1995-01-01
Background: Politeness strategies vary from language to language and within each society. At times the wrong strategies can have disastrous effects. This can occur when languages are used by non-native speakers or when they are used outside of their own home linguistic context. Purpose: This study of spoken language compares the politeness…
Vernacular Literacy in the Touo Language of the Solomon Islands
ERIC Educational Resources Information Center
Dunn, Michael
2005-01-01
The Touo language is a non-Austronesian language spoken on Rendova Island (Western Province, Solomon Islands). First language speakers of Touo are typically multilingual, and are likely to speak other (Austronesian) vernaculars, as well as Solomon Island Pijin and English. There is no institutional support of literacy in Touo: schools function in…
Documenting Indigenous Knowledge and Languages: Research Planning & Protocol.
ERIC Educational Resources Information Center
Leonard, Beth
2001-01-01
The author's experiences of learning her heritage language of Deg Xinag, an Athabascan language spoken in Alaska, serve as a backdrop for discussing issues in learning endangered indigenous languages. When Deg Xinag is taught by linguists, obvious differences between English and Deg Xinag are not articulated, due to the lack of knowledge of…
Kaqchikel Maya Language Analysis Project
ERIC Educational Resources Information Center
Eddy de Pappa, Sarah
2010-01-01
The purpose of this analysis was to study the linguistic features of Kaqchikel, a Mayan language currently spoken in Guatemala and increasingly in the United States, in an effort to better prepare teachers of English as a second language (ESL) or English as a foreign language (EFL) to address the distinct needs of a frequently neglected and…
ERIC Educational Resources Information Center
Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.
2017-01-01
Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…
Regional Sign Language Varieties in Contact: Investigating Patterns of Accommodation
ERIC Educational Resources Information Center
Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy
2016-01-01
Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…
The Pursuit of Language Appropriate Care: Remote Simultaneous Medical Interpretation Use
ERIC Educational Resources Information Center
Logan, Debra M.
2010-01-01
Background: The U.S. government mandates nurses to deliver linguistically appropriate care to hospital patients. It is difficult for nurses to implement the language mandates because there are 6,912 active living languages spoken in the world. Language barriers appear to place limited English proficient (LEP) patients at increased risk for harm…
Language Education Policies and Inequality in Africa: Cross-National Empirical Evidence
ERIC Educational Resources Information Center
Coyne, Gary
2015-01-01
This article examines the relationship between inequality and education through the lens of colonial language education policies in African primary and secondary school curricula. The languages of former colonizers almost always occupy important places in society, yet they are not widely spoken as first languages, meaning that most people depend…
Rhythm in language acquisition.
Langus, Alan; Mehler, Jacques; Nespor, Marina
2017-10-01
Spoken language is governed by rhythm. Linguistic rhythm is hierarchical and the rhythmic hierarchy partially mimics the prosodic as well as the morpho-syntactic hierarchy of spoken language. It can thus provide learners with cues about the structure of the language they are acquiring. We identify three universal levels of linguistic rhythm - the segmental level, the level of the metrical feet and the phonological phrase level - and discuss why primary lexical stress is not rhythmic. We survey experimental evidence on rhythm perception in young infants and native speakers of various languages to determine the properties of linguistic rhythm that are present at birth, those that mature during the first year of life and those that are shaped by the linguistic environment of language learners. We conclude with a discussion of the major gaps in current knowledge on linguistic rhythm and highlight areas of interest for future research that are most likely to yield significant insights into the nature, the perception, and the usefulness of linguistic rhythm.
Singing can facilitate foreign language learning.
Ludke, Karen M; Ferreira, Fernanda; Overy, Katie
2014-01-01
This study presents the first experimental evidence that singing can facilitate short-term paired-associate phrase learning in an unfamiliar language (Hungarian). Sixty adult participants were randomly assigned to one of three "listen-and-repeat" learning conditions: speaking, rhythmic speaking, or singing. Participants in the singing condition showed superior overall performance on a collection of Hungarian language tests after a 15-min learning period, as compared with participants in the speaking and rhythmic speaking conditions. This superior performance was statistically significant (p < .05) for the two tests that required participants to recall and produce spoken Hungarian phrases. The differences in performance were not explained by potentially influencing factors such as age, gender, mood, phonological working memory ability, or musical ability and training. These results suggest that a "listen-and-sing" learning method can facilitate verbatim memory for spoken foreign language phrases.
MAWRID: A Model of Arabic Word Reading in Development.
Saiegh-Haddad, Elinor
2017-07-01
This article offers a model of Arabic word reading according to which three conspicuous features of the Arabic language and orthography shape the development of word reading in this language: (a) vowelization/vocalization, or the use of diacritical marks to represent short vowels and other features of articulation; (b) morphological structure, namely, the predominance and transparency of derivational morphological structure in the linguistic and orthographic representation of the Arabic word; and (c) diglossia, specifically, the lexical and lexico-phonological distance between the spoken and the standard forms of Arabic words. It is argued that the triangulation of these features governs the acquisition and deployment of reading mechanisms across development. Moreover, the difficulties that readers encounter in their journey from beginning to skilled reading may be better understood if evaluated within these language-specific features of Arabic language and orthography.
Pronunciation difficulty, temporal regularity, and the speech-to-song illusion.
Margulis, Elizabeth H; Simchy-Gross, Rhimmon; Black, Justin L
2015-01-01
The speech-to-song illusion (Deutsch et al., 2011) tracks the perceptual transformation from speech to song across repetitions of a brief spoken utterance. Because it involves no change in the stimulus itself, but a dramatic change in its perceived affiliation to speech or to music, it presents a unique opportunity to comparatively investigate the processing of language and music. In this study, native English-speaking participants were presented with brief spoken utterances that were subsequently repeated ten times. The utterances were drawn either from languages that are relatively difficult for a native English speaker to pronounce, or languages that are relatively easy for a native English speaker to pronounce. Moreover, the repetition could occur at regular or irregular temporal intervals. Participants rated the utterances before and after the repetitions on a 5-point Likert-like scale ranging from "sounds exactly like speech" to "sounds exactly like singing." The difference in ratings before and after was taken as a measure of the strength of the speech-to-song illusion in each case. The speech-to-song illusion occurred regardless of whether the repetitions were spaced at regular temporal intervals or not; however, it occurred more readily if the utterance was spoken in a language difficult for a native English speaker to pronounce. Speech circuitry seemed more liable to capture native and easy-to-pronounce languages, and more reluctant to relinquish them to perceived song across repetitions.
Gesture, sign, and language: The coming of age of sign language and gesture studies.
Goldin-Meadow, Susan; Brentari, Diane
2017-01-01
How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
The effect of written text on comprehension of spoken English as a foreign language.
Diao, Yali; Chandler, Paul; Sweller, John
2007-01-01
Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.
[What do bimodal bilinguals have to say about bilingual development?]
de Quadros, Ronice Müller; Lillo-Martin, Diane; Pichler, Deborah Chen
2013-07-01
The goal of this work is to present what our research with hearing children of Deaf parents, acquiring Brazilian Sign Language (Libras) and Portuguese, and American Sign Language (ASL) and English (Lillo-Martin et al. 2010), has to say about bilingual development. The data analyzed in this study are part of a database of spontaneous interactions collected longitudinally, alternating contexts of sign and spoken languages. In addition, data from experimental studies with tests in both pairs of languages are incorporated into the present study. A general view of previous studies of bimodal bilingual acquisition in hearing children of Deaf parents will be presented. We will then show some linguistic aspects of this kind of acquisition found in our study and discuss bilingual acquisition.
Vocal interaction between children with Down syndrome and their parents.
Thiemann-Bourque, Kathy S; Warren, Steven F; Brady, Nancy; Gilkerson, Jill; Richards, Jeffrey A
2014-08-01
The purpose of this study was to describe differences in parent input and child vocal behaviors of children with Down syndrome (DS) compared with typically developing (TD) children. The goals were to describe the language learning environments at distinctly different ages in early childhood. Nine children with DS and 9 age-matched TD children participated; 4 children in each group were ages 9-11 months, and 5 were between 25 and 54 months. Measures were derived from automated vocal analysis. A digital language processor measured the richness of the child's language environment, including number of adult words, conversational turns, and child vocalizations. Analyses indicated no significant differences in words spoken by parents of younger versus older children with DS and significantly more words spoken by parents of TD children than parents of children with DS. Differences between the DS and TD groups were observed in rates of all vocal behaviors, with no differences noted between the younger versus older children with DS, and the younger TD children did not vocalize significantly more than the younger DS children. Parents of children with DS continue to provide consistent levels of input across the early language learning years; however, child vocal behaviors remain low after the age of 24 months, suggesting the need for additional and alternative intervention approaches.
ERIC Educational Resources Information Center
Slavkov, Nikolay
2017-01-01
This article reports on a survey with 170 school-age children growing up with two or more languages in the Canadian province of Ontario where English is the majority language, French is a minority language, and numerous other minority languages may be spoken by immigrant or Indigenous residents. Within this context the study focuses on minority…
Key Data on Teaching Languages at School in Europe. 2017 Edition. Eurydice Report
ERIC Educational Resources Information Center
Baïdak, Nathalie; Balcon, Marie-Pascale; Motiejunaite, Akvile
2017-01-01
Linguistic diversity is part of Europe's DNA. It embraces not only the official languages of Member States, but also the regional and/or minority languages spoken for centuries on European territory, as well as the languages brought by the various waves of migrants. The coexistence of this variety of languages constitutes an asset, but it is also…
ERIC Educational Resources Information Center
Lagos, Cristián; Espinoza, Marco; Rojas, Darío
2013-01-01
In this paper, we analyse the cultural models (or folk theory of language) that the Mapuche intellectual elite have about Mapudungun, the native language of the Mapuche people still spoken today in Chile as the major minority language. Our theoretical frame is folk linguistics and studies of language ideology, but we have also taken an applied…
ERIC Educational Resources Information Center
Pizer, Ginger; Walters, Keith; Meier, Richard P.
2013-01-01
Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing…
ERIC Educational Resources Information Center
Williams, Joshua T.; Newman, Sharlene D.
2017-01-01
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…
The Relationship Between Second Language Anxiety and International Nursing Students' Stress
ERIC Educational Resources Information Center
Khawaja, Nigar G.; Chan, Sabrina; Stein, Georgia
2017-01-01
We examined the relationship between second language anxiety and international nursing student stress after taking into account the demographic, cognitive, and acculturative factors. International nursing students (N = 152) completed an online questionnaire battery. Hierarchical regression analysis revealed that spoken second language anxiety and…
Britain's South Asian Languages.
ERIC Educational Resources Information Center
Mobbs, Michael
This book focuses on the languages spoken by people of South Asian origin living in Britain and is intended to assist individuals in Britain whose work involves them with speakers of these languages. The approach taken is descriptive and practical, offering linguistic, geographic, and historical background information leading to appreciation of…
ERIC Educational Resources Information Center
Nguyen, Tam Thi Minh
2013-01-01
Bih is a Chamic (Austronesian) language spoken by approximately 500 people in the Southern highlands of Vietnam. This dissertation is the first descriptive grammar of the language, based on extensive fieldwork and community-based language documentation in Vietnam and written from a functional/typological perspective. The analysis in this work is…
Targeted Help for Spoken Dialogue Systems: Intelligent Feedback Improves Naive Users' Performance
NASA Technical Reports Server (NTRS)
Hockey, Beth Ann; Lemon, Oliver; Campana, Ellen; Hiatt, Laura; Aist, Gregory; Hieronymous, Jim; Gruenstein, Alexander; Dowding, John
2003-01-01
We present experimental evidence that providing naive users of a spoken dialogue system with immediate help messages related to their out-of-coverage utterances improves their success in using the system. A grammar-based recognizer and a Statistical Language Model (SLM) recognizer are run simultaneously. If the grammar-based recognizer succeeds, the less accurate SLM recognizer hypothesis is not used. When the grammar-based recognizer fails and the SLM recognizer produces a recognition hypothesis, this result is used by the Targeted Help agent to give the user feedback on what was recognized, a diagnosis of what was problematic about the utterance, and a related in-coverage example. The in-coverage example is intended to encourage alignment between user inputs and the language model of the system. We report on controlled experiments on a spoken dialogue system for command and control of a simulated robotic helicopter.
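The dual-recognizer fallback described in the abstract can be sketched in a few lines. This is a minimal illustration with hypothetical string-based stubs (the real system operated on speech input, and the command names here are invented):

```python
# Toy coverage set standing in for the grammar-based recognizer's grammar.
COVERAGE = {"fly to the tower", "land now"}

def grammar_recognize(utterance):
    """Accurate but fails on anything out of coverage."""
    return utterance if utterance in COVERAGE else None

def slm_recognize(utterance):
    """Less accurate SLM recognizer; here it simply echoes the input."""
    return utterance

def respond(utterance):
    """Prefer the grammar hypothesis; otherwise build targeted help."""
    hyp = grammar_recognize(utterance)
    if hyp is not None:
        return {"command": hyp, "help": None}
    slm_hyp = slm_recognize(utterance)
    help_msg = (f'I heard "{slm_hyp}", but that is out of coverage. '
                'Try an in-coverage command like "fly to the tower".')
    return {"command": None, "help": help_msg}
```

The key design point is that the SLM hypothesis is never executed as a command; it is only used to diagnose the failure and to show the user a nearby in-coverage example.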
Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)
NASA Astrophysics Data System (ADS)
Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto
An automatic speech-to-text transformer system, suited to an unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a preceding step of isolated-word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for the input sequence. Regional pronunciation differences within Brazil are considered, but only those that affect the phonological transcription, because differences at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all candidate written words are analyzed from an orthographic and grammatical point of view to eliminate the incorrect ones.
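The core transcoding step can be illustrated as a one-to-many phoneme-to-grapheme expansion followed by a lexicon filter. This is a toy sketch with an invented, tiny mapping table and lexicon; a real PLB system would use much larger knowledge sources and probabilistic ranking:

```python
from itertools import product

# Toy phoneme -> candidate-grapheme table (Portuguese /z/ between
# vowels is spelled "s", as in "casa"; /k/ may be "c" or "qu").
P2G = {"k": ["c", "qu"], "a": ["a"], "z": ["s", "z"]}
LEXICON = {"casa"}  # stand-in for the PLB-specific lexicon

def candidates(phones):
    """Expand a phoneme sequence into all graphemic possibilities."""
    options = [P2G.get(p, [p]) for p in phones]
    return ["".join(combo) for combo in product(*options)]

def transcode(phones):
    """Keep lexicon-attested spellings; fall back to all candidates."""
    cands = candidates(phones)
    in_lex = [w for w in cands if w in LEXICON]
    return in_lex or cands
```

For the phoneme sequence /kaza/ this generates "casa", "caza", "quasa", and "quaza", and the lexicon filter keeps only "casa", mirroring the paper's final orthographic filtering stage.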
Investigating L2 Spoken English through the Role Play Learner Corpus
ERIC Educational Resources Information Center
Nava, Andrea; Pedrazzini, Luciana
2011-01-01
We describe an exploratory study carried out in the Department of English at the University of Milan, the aim of which was to analyse features of the spoken English of first-year Modern Languages undergraduates. We compiled a learner corpus, the "Role Play" corpus, which consisted of 69 role-play interactions in English carried out by…
Pair Counting to Improve Grammar and Spoken Fluency
ERIC Educational Resources Information Center
Hanson, Stephanie
2017-01-01
English language learners are often more grammatically accurate in writing than in speaking. As students focus on meaning while speaking, their spoken fluency comes at a cost: their grammatical accuracy decreases. The author wanted to find a way to help her students improve their oral grammar; that is, she wanted them to focus on grammar while…
ERIC Educational Resources Information Center
Handford, Michael; Matous, Petr
2011-01-01
The purpose of this research is to identify and interpret statistically significant lexicogrammatical items that are used in on-site spoken communication in the international construction industry, initially through comparisons with reference corpora of everyday spoken and business language. Several data sources, including audio and video…
Automated Scoring of L2 Spoken English with Random Forests
ERIC Educational Resources Information Center
Kobayashi, Yuichiro; Abe, Mariko
2016-01-01
The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…
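The scoring setup described above, mapping objectively measurable features to a small number of discrete proficiency levels, can be sketched with scikit-learn. The data below are synthetic stand-ins, not the study's features or learners:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: 200 learners, 12 measurable features
# (e.g. frequencies of lexical and grammatical items).
rng = np.random.default_rng(0)
X = rng.random((200, 12))
# Make the level depend on the first feature so the forest has a
# real (if artificial) signal to learn: 4 discrete proficiency levels.
y = np.digitize(X[:, 0], [0.25, 0.5, 0.75])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
```

Cross-validated accuracy is the usual sanity check for this kind of classifier before interpreting which features drive the predicted proficiency levels.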
Spoken Persuasive Discourse Abilities of Adolescents with Acquired Brain Injury
ERIC Educational Resources Information Center
Moran, Catherine; Kirk, Cecilia; Powell, Emma
2012-01-01
Purpose: The aim of this study was to examine the performance of adolescents with acquired brain injury (ABI) during a spoken persuasive discourse task. Persuasive discourse is frequently used in social and academic settings and is of importance in the study of adolescent language. Method: Participants included 8 adolescents with ABI and 8 peers…
ERIC Educational Resources Information Center
Berent, Iris
2008-01-01
Are the phonological representations of printed and spoken words isomorphic? This question is addressed by investigating the restrictions on onsets. Cross-linguistic research suggests that onsets of rising sonority are preferred to sonority plateaus, which, in turn, are preferred to sonority falls (e.g., bnif, bdif, lbif). Of interest is whether…
A Comparison between Written and Spoken Narratives in Aphasia
ERIC Educational Resources Information Center
Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena
2009-01-01
The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…
ERIC Educational Resources Information Center
Staats, Susan
2017-01-01
Poetic structures emerge in spoken language when speakers repeat grammatical phrases that were spoken before. They create the potential to amend or comment on previous speech, and to convey meaning through the structure of discourse. This paper considers the ways in which poetic structure analysis contributes to two perspectives on emergent…
“Down the Language Rabbit Hole with Alice”: A Case Study of a Deaf Girl with a Cochlear Implant
Andrews, Jean F.; Dionne, Vickie
2011-01-01
Alice, a deaf girl who received a cochlear implant after age three, was exposed to four weeks of storybook sessions conducted in American Sign Language (ASL) and speech (English). Two research questions were addressed: (1) how did she use her sign bimodal/bilingualism, codeswitching, and codemixing during reading activities, and (2) what sign bilingual codeswitching and codemixing strategies did she use while attending to stories delivered under two treatments: ASL only and speech only. Retelling scores were collected to determine the type and frequency of her codeswitching/codemixing strategies between both languages after Alice was read a story in ASL and in spoken English. Qualitative descriptive methods were utilized. Teacher, clinician, and student transcripts of the reading and retelling sessions were recorded. Results showed that Alice frequently used codeswitching and codemixing strategies while retelling the stories under both treatments. Alice's speech production increased in her retellings of the stories under both the ASL storyreading and the spoken English-only reading. The ASL storyreading did not decrease Alice's retelling scores in spoken English. Professionals are encouraged to consider the benefits of early sign bimodal/bilingualism to enhance the overall speech, language, and reading proficiency of deaf children with cochlear implants. PMID:22135677
Ordin, Mikhail; Polyanskaya, Leona
2015-08-01
The development of speech rhythm in second language (L2) acquisition was investigated. Speech rhythm was defined as durational variability that can be captured by the interval-based rhythm metrics. These metrics were used to examine the differences in durational variability between proficiency levels in L2 English spoken by French and German learners. The results reveal that durational variability increased as L2 acquisition progressed in both groups of learners. This indicates that speech rhythm in L2 English develops from more syllable-timed toward more stress-timed patterns irrespective of whether the native language of the learner is rhythmically similar to or different from the target language. Although both groups showed similar development of speech rhythm in L2 acquisition, there were also differences: German learners achieved a degree of durational variability typical of the target language, while French learners exhibited lower variability than native British speakers, even at an advanced proficiency level.
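One widely used interval-based rhythm metric of the kind the study relies on is the normalized Pairwise Variability Index (nPVI), which quantifies durational variability between successive intervals. A minimal sketch (the study used several such metrics, not only this one):

```python
def npvi(durations):
    """Normalized Pairwise Variability Index over successive
    interval durations (e.g. vocalic intervals, in ms).
    Higher values indicate more stress-timed rhythm."""
    pairs = list(zip(durations, durations[1:]))
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)
```

Perfectly even intervals yield an nPVI of 0, while alternating long and short intervals, as in stress-timed speech, push the value up; on this scale the developmental claim is that learners' nPVI rises with proficiency.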
Lexical access in sign language: a computational model.
Caselli, Naomi K; Cohen-Goldberg, Ariel M
2014-01-01
Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: how many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
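The basic mechanism of a spreading activation architecture can be sketched in a few lines. This is a generic toy illustration of activation flow with decay, not the Chen and Mirman (2012) model or the paper's elaborated architecture; node names and weights are invented:

```python
def spread(act, edges, decay=0.5, steps=2):
    """One update cycle per step: every node's activation decays,
    then each weighted edge passes activation from source to target.
    act: node -> activation level; edges: (src, dst) -> weight."""
    for _ in range(steps):
        nxt = {n: decay * a for n, a in act.items()}
        for (src, dst), w in edges.items():
            nxt[dst] = nxt.get(dst, 0.0) + w * act.get(src, 0.0)
        act = nxt
    return act

# Stimulus activates a sign node via a shared sub-lexical unit
# (e.g. a handshape feature) -- hypothetical network.
network = {("input", "handshape_B"): 0.8, ("handshape_B", "sign_a"): 0.9}
result = spread({"input": 1.0}, network)
```

In a fuller model, the density and frequency of shared sub-lexical units (handshape, location) determine whether neighbors facilitate or inhibit recognition; this sketch only shows the flow-and-decay core.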
ERIC Educational Resources Information Center
Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.
2015-01-01
Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of…
The Arizona Home Language Survey: The Identification of Students for ELL Services
ERIC Educational Resources Information Center
Goldenberg, Claude; Rutherford-Quach, Sara
2010-01-01
Assuring that English language learners (ELLs) receive the services to which they have a right requires accurately identifying those students. Virtually all states identify ELLs in a two-step process. First, parents fill out a home language survey. Second, students in whose homes a language other than English is spoken and who therefore might…
How Facebook Can Revitalise Local Languages: Lessons from Bali
ERIC Educational Resources Information Center
Stern, Alissa Joy
2017-01-01
For a language to survive, it must be spoken and passed down to the next generation. But how can we engage teenagers--so crucial for language transmission--to use and value their local tongue when they are bombarded by pressures from outside and from within their society to only speak national and international languages? This paper analyses the…
Standardisation a Considerable Force behind Language Death: A Case of Shona
ERIC Educational Resources Information Center
Mhute, Isaac
2016-01-01
The paper assesses the contribution of standardisation towards language death taking Clement Doke's resolutions on the various Shona dialects as a case study. It is a qualitative analysis of views gathered from speakers of the language situated in various provinces of Zimbabwe, the country in which the language is spoken by around 75% of the…
Mother Tongue versus Arabic: The Post-Independence Eritrean Language Policy Debate
ERIC Educational Resources Information Center
Mohammad, Abdulkader Saleh
2016-01-01
This paper analyses the controversial discourses around the significance of the Arabic language in Eritrea. It challenges the arguments of the government and some scholars, who claim that the Arabic language is alien to Eritrean society. They argue that it was introduced as an official language under British rule and is only spoken by the Rashaida…
First Steps to Endangered Language Documentation: The Kalasha Language, a Case Study
ERIC Educational Resources Information Center
Mela-Athanasopoulou, Elizabeth
2011-01-01
The present paper, based on extensive fieldwork conducted on Kalasha, an endangered language spoken in three small valleys in the Chitral District of Northwestern Pakistan, exposes a spontaneous dialogue-based elicitation of linguistic material used for the description and documentation of the language. After a brief display of the basic typology…
ERIC Educational Resources Information Center
Boehmler, Eileen
1979-01-01
A survey is presented of the Blackfeet language that is used in the Browning area of Montana. The purpose of the survey is to determine the extent to which the language is spoken and passed on at home, and the degree of interest in the language among the young people. The results are presented along with comments where appropriate. Generally, it…
A Grammar of Southern Pomo: An Indigenous Language of California
ERIC Educational Resources Information Center
Walker, Neil Alexander
2013-01-01
Southern Pomo is a moribund indigenous language, one of seven closely related Pomoan languages once spoken in Northern California in the vicinity of the Russian River drainage, Clear Lake, and the adjacent Pacific coast. This work is the first full-length grammar of the language. It is divided into three parts. Part I introduces the sociocultural…
Listening to Accented Speech in a Second Language: First Language and Age of Acquisition Effects
ERIC Educational Resources Information Center
Larraza, Saioa; Samuel, Arthur G.; Oñederra, Miren Lourdes
2016-01-01
Bilingual speakers must acquire the phonemic inventory of 2 languages and need to recognize spoken words cross-linguistically; a demanding job potentially made even more difficult due to dialectal variation, an intrinsic property of speech. The present work examines how bilinguals perceive second language (L2) accented speech and where…
ERIC Educational Resources Information Center
DePalma, Renée
2015-01-01
This study investigates the self-reported experiences of students participating in a Galician language and culture course. Galician, a language historically spoken in northwestern Spain, has been losing ground with respect to Spanish, particularly in urban areas and among the younger generations. The research specifically focuses on informal…
Prestige from the Bottom Up: A Review of Language Planning in Guernsey
ERIC Educational Resources Information Center
Sallabank, Julia
2005-01-01
This paper discusses language planning measures in Guernsey, Channel Islands. The indigenous language is spoken fluently by only 2% of the population, and is at level 7 on Fishman's 8-point scale of endangerment. It has no official status and low social prestige, and language planning has little official support or funding. Political autonomy has…
Community Language Learning and Counseling-Learning. TEAL Occasional Papers, Vol. 1, 1977.
ERIC Educational Resources Information Center
Soga, Lillian
Community Language Learning (CLL) is a humanistic approach to learning which emphasizes the learner and learning rather than the teacher and teaching. In some situations where the teacher is not fluent in the various languages spoken by the students, such as in the English as a second language (ESL) classroom, advanced students may serve as…