Sample records for acquiring syllable-timed languages

  1. Rhythmic speech and stuttering reduction in a syllable-timed language.

    PubMed

    Law, Thomas; Packman, Ann; Onslow, Mark; To, Carol K-S; Tong, Michael C-F; Lee, Kathy Y-S

    2018-06-06

    Speaking rhythmically, also known as syllable-timed speech (STS), has been known for centuries to be a fluency-inducing condition for people who stutter. Cantonese is a tonal syllable-timed language and it has been shown that, of all languages, Cantonese is the most rhythmic (Mok, 2009). However, it is not known if STS reduces stuttering in Cantonese as it does in English. This is the first study to investigate the effects of STS on stuttering in a syllable-timed language. Nineteen native Cantonese-speaking adults who stutter were engaged in conversational tasks in Cantonese under two conditions: one in their usual speaking style and one using STS. The speakers' percentage syllables stuttered (%SS) and speech rhythmicity were rated. The rhythmicity ratings were used to estimate the extent to which speakers were using STS in the syllable-timed condition. Results revealed a statistically significant reduction in %SS in the STS condition; however, this reduction was not as large as in previous studies in other languages and the amount of stuttering reduction varied across speakers. The rhythmicity ratings showed that some speakers were perceived to be speaking more rhythmically than others and that the perceived rhythmicity correlated positively with reductions in stuttering. The findings were unexpected, as it was anticipated that speakers of a highly rhythmic language such as Cantonese would find STS easy to use and that the consequent reductions in stuttering would be great, even greater perhaps than in a stress-timed language such as English. The theoretical and clinical implications of the findings are discussed.
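The dependent measure here, percent syllables stuttered (%SS), is a simple ratio of stuttered to total syllables. A minimal sketch (the counts in the example are invented for illustration, not from the study):

```python
def percent_syllables_stuttered(stuttered: int, total: int) -> float:
    """%SS = 100 * (stuttered syllables / total syllables spoken)."""
    if total <= 0:
        raise ValueError("total syllable count must be positive")
    return 100.0 * stuttered / total

# e.g. 12 stuttered syllables in a 400-syllable conversational sample
print(percent_syllables_stuttered(12, 400))  # 3.0
```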

  2. The Role of Syllables in Intermediate-Depth Stress-Timed Languages: Masked Priming Evidence in European Portuguese

    ERIC Educational Resources Information Center

    Campos, Ana Duarte; Mendes Oliveira, Helena; Soares, Ana Paula

    2018-01-01

    The role of syllables as a sublexical unit in visual word recognition and reading is well established in deep and shallow syllable-timed languages such as French and Spanish, respectively. However, its role in intermediate stress-timed languages remains unclear. This paper aims to overcome this gap by studying for the first time the role of…

  3. Stochastic Time Models of Syllable Structure

    PubMed Central

    Shaw, Jason A.; Gafos, Adamantios I.

    2015-01-01

    Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure proving resilient to multiple sources of variability in experimental data including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153

  4. Syllable timing and pausing: evidence from Cantonese.

    PubMed

    Perry, Conrad; Wong, Richard Kwok-Shing; Matthews, Stephen

    2009-01-01

    We examined the relationship between the acoustic duration of syllables and the silent pauses that follow them in Cantonese. The results showed that at major syntactic junctures, acoustic plus silent pause durations were quite similar for a number of different syllable types whose acoustic durations differed substantially. In addition, it appeared that CV: syllables, which had the longest acoustic duration of all syllable types that were examined, were also the least likely to have silent pauses after them. These results suggest that cross-language differences between the probability that silent pauses are used at major syntactic junctures might potentially be explained by the accuracy at which timing slots can be assigned for syllables, rather than more complex explanations that have been proposed.

  5. The role of syllables in sign language production

    PubMed Central

    Baus, Cristina; Gutiérrez, Eva; Carreiras, Manuel

    2014-01-01

    The aim of the present study was to investigate the functional role of syllables in sign language and how the different phonological combinations influence sign production. Moreover, the influence of age of acquisition was evaluated. Deaf signers (native and non-native) of Catalan Signed Language (LSC) were asked in a picture-sign interference task to sign picture names while ignoring distractor-signs with which they shared two phonological parameters (out of three of the main sign parameters: Location, Movement, and Handshape). The results revealed a different impact of the three phonological combinations. While no effect was observed for the phonological combination Handshape-Location, the combination Handshape-Movement slowed down signing latencies, but only in the non-native group. A facilitatory effect was observed for both groups when pictures and distractors shared Location-Movement. Importantly, linguistic models have considered this phonological combination to be a privileged unit in the composition of signs, as syllables are in spoken languages. Thus, our results support the functional role of syllable units during phonological articulation in sign language production. PMID:25431562

  6. The role of syllables in sign language production.

    PubMed

    Baus, Cristina; Gutiérrez, Eva; Carreiras, Manuel

    2014-01-01

    The aim of the present study was to investigate the functional role of syllables in sign language and how the different phonological combinations influence sign production. Moreover, the influence of age of acquisition was evaluated. Deaf signers (native and non-native) of Catalan Signed Language (LSC) were asked in a picture-sign interference task to sign picture names while ignoring distractor-signs with which they shared two phonological parameters (out of three of the main sign parameters: Location, Movement, and Handshape). The results revealed a different impact of the three phonological combinations. While no effect was observed for the phonological combination Handshape-Location, the combination Handshape-Movement slowed down signing latencies, but only in the non-native group. A facilitatory effect was observed for both groups when pictures and distractors shared Location-Movement. Importantly, linguistic models have considered this phonological combination to be a privileged unit in the composition of signs, as syllables are in spoken languages. Thus, our results support the functional role of syllable units during phonological articulation in sign language production.

  7. Syllable Structure Universals and Native Language Interference in Second Language Perception and Production: Positional Asymmetry and Perceptual Links to Accentedness

    PubMed Central

    Cheng, Bing; Zhang, Yang

    2015-01-01

    The present study investigated how syllable structure differences between the first Language (L1) and the second language (L2) affect L2 consonant perception and production at syllable-initial and syllable-final positions. The participants were Mandarin-speaking college students who studied English as a second language. Monosyllabic English words were used in the perception test. Production was recorded from each Chinese subject and rated for accentedness by two native speakers of English. Consistent with previous studies, significant positional asymmetry effects were found across speech sound categories in terms of voicing, place of articulation, and manner of articulation. Furthermore, significant correlations between perception and accentedness ratings were found at the syllable onset position but not for the coda. Many exceptions were also found, which could not be solely accounted for by differences in L1–L2 syllabic structures. The results show a strong effect of language experience at the syllable level, which joins force with acoustic, phonetic, and phonemic properties of individual consonants in influencing positional asymmetry in both domains of L2 segmental perception and production. The complexities and exceptions call for further systematic studies on the interactions between syllable structure universals and native language interference with refined theoretical models to specify the links between perception and production in second language acquisition. PMID:26635699

  8. Assessment of rhythmic entrainment at multiple timescales in dyslexia: evidence for disruption to syllable timing.

    PubMed

    Leong, Victoria; Goswami, Usha

    2014-02-01

    Developmental dyslexia is associated with rhythmic difficulties, including impaired perception of beat patterns in music and prosodic stress patterns in speech. Spoken prosodic rhythm is cued by slow (<10 Hz) fluctuations in speech signal amplitude. Impaired neural oscillatory tracking of these slow amplitude modulation (AM) patterns is one plausible source of impaired rhythm tracking in dyslexia. Here, we characterise the temporal profile of the dyslexic rhythm deficit by examining rhythmic entrainment at multiple speech timescales. Adult dyslexic participants completed two experiments aimed at testing the perception and production of speech rhythm. In the perception task, participants tapped along to the beat of 4 metrically-regular nursery rhyme sentences. In the production task, participants produced the same 4 sentences in time to a metronome beat. Rhythmic entrainment was assessed using both traditional rhythmic indices and a novel AM-based measure, which utilised 3 dominant AM timescales in the speech signal, each associated with a different phonological grain-sized unit (0.9-2.5 Hz, prosodic stress; 2.5-12 Hz, syllables; 12-40 Hz, phonemes). The AM-based measure revealed atypical rhythmic entrainment by dyslexic participants to syllable patterns in speech, in perception and production. In the perception task, both groups showed equally strong phase-locking to Syllable AM patterns, but dyslexic responses were entrained to a significantly earlier oscillatory phase angle than controls. In the production task, dyslexic utterances showed shorter syllable intervals, and differences in Syllable:Phoneme AM cross-frequency synchronisation. Our data support the view that rhythmic entrainment at slow (∼5 Hz, Syllable) rates is atypical in dyslexia, suggesting that neural mechanisms for syllable perception and production may also be atypical. These syllable timing deficits could contribute to the atypical development of phonological representations for spoken words.
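The three AM timescales named in the abstract can be isolated by band-pass filtering the amplitude envelope. A rough sketch using FFT bin masking; the paper's actual filterbank design is not given here, and the demo signal is synthetic:

```python
import numpy as np

# Dominant AM timescales reported in the study (Hz), one per phonological unit
AM_BANDS = {"stress": (0.9, 2.5), "syllable": (2.5, 12.0), "phoneme": (12.0, 40.0)}

def bandpass_fft(envelope: np.ndarray, fs: float, lo: float, hi: float) -> np.ndarray:
    """Zero-phase band-pass via FFT bin masking: zero all spectral bins
    outside [lo, hi] Hz, then invert back to the time domain."""
    spectrum = np.fft.rfft(envelope)
    freqs = np.fft.rfftfreq(envelope.size, d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=envelope.size)

# Demo: a 4 Hz "syllable-rate" component survives the syllable band,
# while the 30 Hz "phoneme-rate" component is removed
fs = 200.0
t = np.arange(0, 5, 1 / fs)
env = np.sin(2 * np.pi * 4 * t) + np.sin(2 * np.pi * 30 * t)
syl = bandpass_fft(env, fs, *AM_BANDS["syllable"])
```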

  9. Using syllable-timed speech to treat preschool children who stutter: a multiple baseline experiment.

    PubMed

    Trajkovski, Natasha; Andrews, Cheryl; Onslow, Mark; Packman, Ann; O'Brian, Sue; Menzies, Ross

    2009-03-01

    This report presents the results of an experimental investigation of the effects of a syllable-timed speech treatment on three stuttering preschool children. Syllable-timed speech involves speaking with minimal differentiation in linguistic stress across syllables. Three children were studied in a multiple baseline across participants design, with percent syllables stuttered (%SS) as the dependent variable. In the week following the initial clinic visit, each child decreased their beyond-clinic stuttering by 40%, 49% and 32%, respectively. These reductions are only evident in the time series after the introduction of the syllable-timed speech treatment procedure. Participants required a mean of six clinic visits, of approximately 30-60 min in duration, to reach and sustain a beyond-clinic %SS below 1.0. The results suggest that clinical trials of the treatment are warranted. The reader will be able to summarize, discuss and evaluate: (1) The nature, impact and treatment options available for early stuttering. (2) The syllable-timed speech treatment protocol administered. (3) The advantages of syllable-timed speech treatment for early stuttering. (4) The questions that further research needs to answer about the syllable-timed speech treatment.

  10. A Cross-Language Study of Laryngeal-Oral Coordination across Varying Prosodic and Syllable-Structure Conditions

    ERIC Educational Resources Information Center

    Hoole, Philip; Bombien, Lasse

    2017-01-01

    Purpose: The purpose of this study is to use prosodic and syllable-structure variation to probe the underlying representation of laryngeal kinematics in languages traditionally considered to differ in voicing typology (German vs. Dutch and French). Method: Transillumination and videofiberendoscopic filming were used to investigate the devoicing…

  11. Effects of Diet on Early Stage Cortical Perception and Discrimination of Syllables Differing in Voice-Onset Time: A Longitudinal ERP Study in 3 and 6 Month Old Infants

    ERIC Educational Resources Information Center

    Pivik, R. T.; Andres, Aline; Badger, Thomas M.

    2012-01-01

    The influence of diet on cortical processing of syllables was examined at 3 and 6 months in 239 infants who were breastfed or fed milk or soy-based formula. Event-related potentials to syllables differing in voice-onset-time were recorded from placements overlying brain areas specialized for language processing. P1 component amplitude and latency…

  12. Assessing Plural Morphology in Children Acquiring /S/-Leniting Dialects of Spanish

    ERIC Educational Resources Information Center

    Miller, Karen

    2014-01-01

    Purpose: To examine the production of plural morphology in children acquiring a dialect of Spanish with syllable-final /s/ lenition with the goal of comparing how plural marker omissions in the speech of these children compare with plural marker omissions in children with language impairment acquiring other varieties of Spanish. Method: Three…

  13. Timed picture naming in seven languages

    PubMed Central

    BATES, ELIZABETH; D’AMICO, SIMONA; JACOBSEN, THOMAS; SZÉKELY, ANNA; ANDONOVA, ELENA; DEVESCOVI, ANTONELLA; HERRON, DAN; LU, CHING CHING; PECHMANN, THOMAS; PLÉH, CSABA; WICHA, NICOLE; FEDERMEIER, KARA; GERDJIKOVA, IRINI; GUTIERREZ, GABRIEL; HUNG, DAISY; HSU, JEANNE; IYER, GOWRI; KOHNERT, KATHERINE; MEHOTCHEVA, TEODORA; OROZCO-FIGUEROA, ARACELI; TZENG, ANGELA; TZENG, OVID

    2012-01-01

    Timed picture naming was compared in seven languages that vary along dimensions known to affect lexical access. Analyses over items focused on factors that determine cross-language universals and cross-language disparities. With regard to universals, number of alternative names had large effects on reaction time within and across languages after target–name agreement was controlled, suggesting inhibitory effects from lexical competitors. For all the languages, word frequency and goodness of depiction had large effects, but objective picture complexity did not. Effects of word structure variables (length, syllable structure, compounding, and initial frication) varied markedly over languages. Strong cross-language correlations were found in naming latencies, frequency, and length. Other-language frequency effects were observed (e.g., Chinese frequencies predicting Spanish reaction times) even after within-language effects were controlled (e.g., Spanish frequencies predicting Spanish reaction times). These surprising cross-language correlations challenge widely held assumptions about the lexical locus of length and frequency effects, suggesting instead that they may (at least in part) reflect familiarity and accessibility at a conceptual level that is shared over languages. PMID:12921412

  14. Visualizing Syllables: Real-Time Computerized Feedback within a Speech-Language Intervention

    ERIC Educational Resources Information Center

    DeThorne, Laura; Aparicio Betancourt, Mariana; Karahalios, Karrie; Halle, Jim; Bogue, Ellen

    2015-01-01

    Computerized technologies now offer unprecedented opportunities to provide real-time visual feedback to facilitate children's speech-language development. We employed a mixed-method design to examine the effectiveness of two speech-language interventions aimed at facilitating children's multisyllabic productions: one incorporated a novel…

  15. Rise Time Perception and Detection of Syllable Stress in Adults with Developmental Dyslexia

    ERIC Educational Resources Information Center

    Leong, Victoria; Hamalainen, Jarmo; Soltesz, Fruzsina; Goswami, Usha

    2011-01-01

    Introduction: The perception of syllable stress has not been widely studied in developmental dyslexia, despite strong evidence for auditory rhythmic perceptual difficulties. Here we investigate the hypothesis that perception of sound rise time is related to the perception of syllable stress in adults with developmental dyslexia. Methods: A…

  16. Highly Complex Syllable Structure: A Typological Study of Its Phonological Characteristics and Diachronic Development

    ERIC Educational Resources Information Center

    Easterday, Shelece Michelle

    2017-01-01

    The syllable is a natural unit of organization in spoken language. Strong cross-linguistic tendencies in syllable size and shape are often explained in terms of a universal preference for the CV structure, a type which is also privileged in abstract models of the syllable. Syllable patterns such as those found in Itelmen "qsa?txt??"…

  17. Phrase-Final Syllable Lengthening and Intonation in Early Child Speech.

    ERIC Educational Resources Information Center

    Snow, David

    1994-01-01

    To test opposing theories about the relationship between intonation and syllable timing, these boundary features were compared in a longitudinal study of 9 children's speech development between the mean ages of 16 and 25 months. Results suggest that young children acquire the skills that control intonation earlier than they do skills of final…

  18. Automated classification of mouse pup isolation syllables: from cluster analysis to an Excel-based "mouse pup syllable classification calculator".

    PubMed

    Grimsley, Jasmine M S; Gadziola, Marie A; Wenstrup, Jeffrey J

    2012-01-01

    Mouse pups vocalize at high rates when they are cold or isolated from the nest. The proportions of each syllable type produced carry information about disease state and are being used as behavioral markers for the internal state of animals. Manual classifications of these vocalizations identified 10 syllable types based on their spectro-temporal features. However, manual classification of mouse syllables is time consuming and vulnerable to experimenter bias. This study uses an automated cluster analysis to identify acoustically distinct syllable types produced by CBA/CaJ mouse pups, and then compares the results to prior manual classification methods. The cluster analysis identified two syllable types, based on their frequency bands, that have continuous frequency-time structure, and two syllable types featuring abrupt frequency transitions. Although cluster analysis computed fewer syllable types than manual classification, the clusters represented well the probability distributions of the acoustic features within syllables. These probability distributions indicate that some of the manually classified syllable types are not statistically distinct. The characteristics of the four classified clusters were used to generate a Microsoft Excel-based mouse syllable classifier that rapidly categorizes syllables, with over a 90% match, into the syllable types determined by cluster analysis.
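The cluster-analysis step can be illustrated with a toy k-means over two spectro-temporal features; the feature values and k=2 are invented for the demo, not taken from the CBA/CaJ data or the study's actual algorithm:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Toy syllables as (duration_ms, peak_kHz): two acoustically distinct groups
syllables = [(20, 60), (22, 62), (19, 58), (80, 90), (85, 92), (78, 88)]
centroids, clusters = kmeans(syllables, k=2)
```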

  19. The Basis of the Syllable Hierarchy: Articulatory Pressures or Universal Phonological Constraints?

    PubMed

    Zhao, Xu; Berent, Iris

    2018-02-01

    Across languages, certain syllable types are systematically preferred to others (e.g., blif ≻ bnif ≻ bdif ≻ lbif, where ≻ indicates a preference). Previous research has shown that these preferences are active in the brains of individual speakers, they are evident even when none of these syllable types exists in participants' language, and even when the stimuli are presented in print. These results suggest that the syllable hierarchy cannot be reduced to either lexical or auditory/phonetic pressures. Here, we examine whether the syllable hierarchy is due to articulatory pressures. According to the motor embodiment view, the perception of a linguistic stimulus requires simulating its production; dispreferred syllables (e.g., lbif) are universally disliked because their production is harder to simulate. To address this possibility, we assessed syllable preferences while articulation was mechanically suppressed. Our four experiments each found significant effects of suppression. Remarkably, people remained sensitive to the syllable hierarchy regardless of suppression. Specifically, results with auditory materials (Experiments 1-2) showed strong effects of syllable structure irrespective of suppression. Moreover, syllable structure uniquely accounted for listeners' behavior even when controlling for several phonetic characteristics of our auditory materials. Results with printed stimuli (Experiments 3-4) were more complex, as participants in these experiments relied on both phonological and graphemic information. Nonetheless, readers were sensitive to most of the syllable hierarchy (e.g., [Formula: see text]), and these preferences emerged when articulation was suppressed, and even when the statistical properties of our materials were controlled via a regression analysis. Together, these findings indicate that speakers possess broad grammatical preferences that are irreducible to either sensory or motor factors.

  20. Quantitative Investigations in Hungarian Phonotactics and Syllable Structure

    ERIC Educational Resources Information Center

    Grimes, Stephen M.

    2010-01-01

    This dissertation investigates statistical properties of segment collocation and syllable geometry of the Hungarian language. A corpus and dictionary based approach to studying language phonologies is outlined. In order to conduct research on Hungarian, a phonological lexicon was created by compiling existing dictionaries and corpora and using a…

  1. The Basis of the Syllable Hierarchy: Articulatory Pressures or Universal Phonological Constraints?

    ERIC Educational Resources Information Center

    Zhao, Xu; Berent, Iris

    2018-01-01

    Across languages, certain syllable types are systematically preferred to others (e.g., "blif" ≻ "bnif" ≻ "bdif" ≻ "lbif" where ≻ indicates a preference). Previous research has shown that these preferences are active in the brains of individual speakers, they are evident even when none of these syllable types…

  2. Temporal order processing of syllables in the left parietal lobe.

    PubMed

    Moser, Dana; Baker, Julie M; Sanchez, Carmen E; Rorden, Chris; Fridriksson, Julius

    2009-10-07

    Speech processing requires the temporal parsing of syllable order. Individuals suffering from posterior left hemisphere brain injury often exhibit temporal processing deficits as well as language deficits. Although the right posterior inferior parietal lobe has been implicated in temporal order judgments (TOJs) of visual information, there is limited evidence to support the role of the left inferior parietal lobe (IPL) in processing syllable order. The purpose of this study was to examine whether the left inferior parietal lobe is recruited during temporal order judgments of speech stimuli. Functional magnetic resonance imaging data were collected on 14 normal participants while they completed the following forced-choice tasks: (1) syllable order of multisyllabic pseudowords, (2) syllable identification of single syllables, and (3) gender identification of both multisyllabic and monosyllabic speech stimuli. Results revealed increased neural recruitment in the left inferior parietal lobe when participants made judgments about syllable order compared with both syllable identification and gender identification. These findings suggest that the left inferior parietal lobe plays an important role in processing syllable order and support the hypothesized role of this region as an interface between auditory speech and the articulatory code. Furthermore, a breakdown in this interface may explain some components of the speech deficits observed after posterior damage to the left hemisphere.

  3. Temporal Order Processing of Syllables in the Left Parietal Lobe

    PubMed Central

    Baker, Julie M.; Sanchez, Carmen E.; Rorden, Chris; Fridriksson, Julius

    2009-01-01

    Speech processing requires the temporal parsing of syllable order. Individuals suffering from posterior left hemisphere brain injury often exhibit temporal processing deficits as well as language deficits. Although the right posterior inferior parietal lobe has been implicated in temporal order judgments (TOJs) of visual information, there is limited evidence to support the role of the left inferior parietal lobe (IPL) in processing syllable order. The purpose of this study was to examine whether the left inferior parietal lobe is recruited during temporal order judgments of speech stimuli. Functional magnetic resonance imaging data were collected on 14 normal participants while they completed the following forced-choice tasks: (1) syllable order of multisyllabic pseudowords, (2) syllable identification of single syllables, and (3) gender identification of both multisyllabic and monosyllabic speech stimuli. Results revealed increased neural recruitment in the left inferior parietal lobe when participants made judgments about syllable order compared with both syllable identification and gender identification. These findings suggest that the left inferior parietal lobe plays an important role in processing syllable order and support the hypothesized role of this region as an interface between auditory speech and the articulatory code. Furthermore, a breakdown in this interface may explain some components of the speech deficits observed after posterior damage to the left hemisphere. PMID:19812331

  4. Words, rules, and mechanisms of language acquisition.

    PubMed

    Endress, Ansgar D; Bonatti, Luca L

    2016-01-01

    We review recent artificial language learning studies, especially those following Endress and Bonatti (Endress AD, Bonatti LL. Rapid learning of syllable classes from a perceptually continuous speech stream. Cognition 2007, 105:247-299), suggesting that humans can deploy a variety of learning mechanisms to acquire artificial languages. Several experiments provide evidence for multiple learning mechanisms that can be deployed in fluent speech: one mechanism encodes the positions of syllables within words and can be used to extract generalization, while the other registers co-occurrence statistics of syllables and can be used to break a continuum into its components. We review dissociations between these mechanisms and their potential role in language acquisition. We then turn to recent criticisms of the multiple mechanisms hypothesis and show that they are inconsistent with the available data. Our results suggest that artificial and natural language learning is best understood by dissecting the underlying specialized learning abilities, and that these data provide a rare opportunity to link important language phenomena to basic psychological mechanisms. For further resources related to this article, please visit the WIREs website. © 2015 Wiley Periodicals, Inc.

  5. InfoSyll: A Syllabary Providing Statistical Information on Phonological and Orthographic Syllables

    ERIC Educational Resources Information Center

    Chetail, Fabienne; Mathey, Stephanie

    2010-01-01

    There is now a growing body of evidence in various languages supporting the claim that syllables are functional units of visual word processing. In the perspective of modeling the processing of polysyllabic words and the activation of syllables, current studies investigate syllabic effects with subtle manipulations. We present here a syllabary of…

  6. Tone classification of syllable-segmented Thai speech based on multilayer perceptron

    NASA Astrophysics Data System (ADS)

    Satravaha, Nuttavudh; Klinkhachorn, Powsiri; Lass, Norman

    2002-05-01

    Thai is a monosyllabic tonal language that uses tone to convey lexical information about the meaning of a syllable. Thus to completely recognize a spoken Thai syllable, a speech recognition system not only has to recognize a base syllable but also must correctly identify a tone. Hence, tone classification of Thai speech is an essential part of a Thai speech recognition system. Thai has five distinctive tones ("mid," "low," "falling," "high," and "rising") and each tone is represented by a single fundamental frequency (F0) pattern. However, several factors, including tonal coarticulation, stress, intonation, and speaker variability, affect the F0 pattern of a syllable in continuous Thai speech. In this study, an efficient method for tone classification of syllable-segmented Thai speech, which incorporates the effects of tonal coarticulation, stress, and intonation, as well as a method to perform automatic syllable segmentation, were developed. Acoustic parameters were used as the main discriminating parameters. The F0 contour of a segmented syllable was normalized by using a z-score transformation before being presented to a tone classifier. The proposed system was evaluated on 920 test utterances spoken by 8 speakers. A recognition rate of 91.36% was achieved by the proposed system.
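The z-score step normalizes each F0 contour to zero mean and unit variance, removing speaker-dependent pitch range while preserving the tone's shape. A minimal sketch (the contour values are invented for illustration):

```python
def zscore(contour):
    """Normalize an F0 contour (Hz) to zero mean and unit variance,
    giving a speaker-independent input for a tone classifier."""
    n = len(contour)
    mean = sum(contour) / n
    sd = (sum((f - mean) ** 2 for f in contour) / n) ** 0.5
    return [(f - mean) / sd for f in contour]

# A falling-tone-like contour: absolute pitch is removed, the fall is kept
z = zscore([220.0, 210.0, 195.0, 175.0])
```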

  7. Comparison of English Language Rhythm and Kalhori Kurdish Language Rhythm

    ERIC Educational Resources Information Center

    Taghva, Nafiseh; Zadeh, Vahideh Abolhasani

    2016-01-01

    The interval-based method is a method of studying the rhythmic quantitative features of languages. This method uses the Pairwise Variability Index (PVI) to measure the variability of vocalic and inter-vocalic durations in sentences, which leads to a classification of language rhythm into stress-timed and syllable-timed languages. This study…
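The PVI has a standard normalized form (nPVI) computed over successive interval durations. A sketch under the usual definition; the duration values are invented:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index:
    nPVI = 100 * mean(|d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)).
    Higher values are associated with stress-timed rhythm,
    lower values with syllable-timed rhythm."""
    pairs = zip(durations, durations[1:])
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

# Perfectly even intervals (the syllable-timed extreme) give 0
print(npvi([100, 100, 100, 100]))  # 0.0
```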

  8. Consonant acquisition in the Malay language: a cross-sectional study of preschool aged Malay children.

    PubMed

    Phoon, Hooi San; Abdullah, Anna Christina; Lee, Lay Wah; Murugaiah, Puvaneswary

    2014-05-01

    To date, there has been little research done on phonological acquisition in the Malay language of typically developing Malay-speaking children. This study serves to fill this gap by providing a systematic description of Malay consonant acquisition in a large cohort of preschool-aged children between 4- and 6-years-old. In the study, 326 Malay-dominant speaking children were assessed using a picture naming task that elicited 53 single words containing all the primary consonants in Malay. Two main analyses were conducted to study their consonant acquisition: (1) age of customary and mastery production of consonants; and (2) consonant accuracy. Results revealed that Malay children acquired all the syllable-initial and syllable-final consonants before 4;06-years-old, with the exception of syllable-final /s/, /h/ and /l/ which were acquired after 5;06-years-old. The development of Malay consonants increased gradually from 4- to 6 years old, with female children performing better than male children. The accuracy of consonants based on manner of articulation showed that glides, affricates, nasals, and stops were higher than fricatives and liquids. In general, syllable-initial consonants were more accurate than syllable-final consonants while consonants in monosyllabic and disyllabic words were more accurate than polysyllabic words. These findings will provide significant information for speech-language pathologists for assessing Malay-speaking children and designing treatment objectives that reflect the course of phonological development in Malay.

  9. Assessment of rhythmic entrainment at multiple timescales in dyslexia: Evidence for disruption to syllable timing☆

    PubMed Central

    Leong, Victoria; Goswami, Usha

    2014-01-01

    Developmental dyslexia is associated with rhythmic difficulties, including impaired perception of beat patterns in music and prosodic stress patterns in speech. Spoken prosodic rhythm is cued by slow (<10 Hz) fluctuations in speech signal amplitude. Impaired neural oscillatory tracking of these slow amplitude modulation (AM) patterns is one plausible source of impaired rhythm tracking in dyslexia. Here, we characterise the temporal profile of the dyslexic rhythm deficit by examining rhythmic entrainment at multiple speech timescales. Adult dyslexic participants completed two experiments aimed at testing the perception and production of speech rhythm. In the perception task, participants tapped along to the beat of 4 metrically-regular nursery rhyme sentences. In the production task, participants produced the same 4 sentences in time to a metronome beat. Rhythmic entrainment was assessed using both traditional rhythmic indices and a novel AM-based measure, which utilised 3 dominant AM timescales in the speech signal each associated with a different phonological grain-sized unit (0.9–2.5 Hz, prosodic stress; 2.5–12 Hz, syllables; 12–40 Hz, phonemes). The AM-based measure revealed atypical rhythmic entrainment by dyslexic participants to syllable patterns in speech, in perception and production. In the perception task, both groups showed equally strong phase-locking to Syllable AM patterns, but dyslexic responses were entrained to a significantly earlier oscillatory phase angle than controls. In the production task, dyslexic utterances showed shorter syllable intervals, and differences in Syllable:Phoneme AM cross-frequency synchronisation. Our data support the view that rhythmic entrainment at slow (∼5 Hz, Syllable) rates is atypical in dyslexia, suggesting that neural mechanisms for syllable perception and production may also be atypical. These syllable timing deficits could contribute to the atypical development of phonological representations for
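    The phase-locking analysis described above can be illustrated with the standard circular-statistics formulation (the study's exact AM-based measure may differ); given per-event phase differences in radians, the length of the mean unit vector gives entrainment strength and its argument the mean phase angle:

```python
import cmath

def phase_locking(phases):
    """Phase-locking value (PLV) and mean phase angle.

    phases: phase differences (radians) between stimulus rhythm and
    response, one per event. The PLV is the length of the mean unit
    vector: 1.0 = perfect locking, ~0 = no consistent phase relation."""
    vectors = [cmath.exp(1j * p) for p in phases]
    mean_vec = sum(vectors) / len(vectors)
    return abs(mean_vec), cmath.phase(mean_vec)
```

    On this formulation, two groups can show the same PLV (equally strong locking) while differing in mean phase angle, as reported for the dyslexic group here.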

  10. Tip-of-the-tongue states reveal age differences in the syllable frequency effect.

    PubMed

    Farrell, Meagan T; Abrams, Lise

    2011-01-01

    Syllable frequency has been shown to facilitate production in some languages but has yielded inconsistent results in English and has never been examined in older adults. Tip-of-the-tongue (TOT) states represent a unique type of production failure in which the phonology of a word cannot be retrieved, suggesting that the frequency of phonological forms, like syllables, may influence the occurrence of TOT states. In the current study, we investigated the role of first-syllable frequency on TOT incidence and resolution in young (18-26 years of age), young-old (60-74 years of age), and old-old (75-89 years of age) adults. Data from 3 published studies were compiled, where TOTs were elicited by presenting definition-like questions and asking participants to respond with "Know," "Don't Know," or "TOT." Young-old and old-old adults, but not young adults, experienced more TOTs for words beginning with low-frequency first syllables relative to high-frequency first syllables. Furthermore, age differences in TOT incidence occurred only for words with low-frequency first syllables. In contrast, when a prime word with the same first syllable as the target was presented during TOT states, all age groups resolved more TOTs for words beginning with low-frequency syllables. These findings support speech production models that allow for bidirectional activation between conceptual, lexical, and phonological forms of words. Furthermore, the age-specific effects of syllable frequency provide insight into the progression of age-linked changes to phonological processes.

  11. Effects of syllable structure in aphasic errors: implications for a new model of speech production.

    PubMed

    Romani, Cristina; Galluzzi, Claudia; Bureca, Ivana; Olson, Andrew

    2011-03-01

    Current models of word production assume that words are stored as linear sequences of phonemes which are structured into syllables only at the moment of production. This is because syllable structure is always recoverable from the sequence of phonemes. In contrast, we present theoretical and empirical evidence that syllable structure is lexically represented. Storing syllable structure would have the advantage of making representations more stable and resistant to damage. On the other hand, re-syllabifications affect only a minimal part of phonological representations and occur only in some languages and depending on speech register. Evidence for these claims comes from analyses of aphasic errors which not only respect phonotactic constraints, but also avoid transformations which move the syllabic structure of the word further away from the original structure, even when equating for segmental complexity. This is true across tasks, types of errors, and, crucially, types of patients. The same syllabic effects are shown by apraxic patients and by phonological patients who have more central difficulties in retrieving phonological representations. If syllable structure were only computed after phoneme retrieval, it would have no way to influence the errors of phonological patients. Our results have implications for psycholinguistic and computational models of language as well as for clinical and educational practices.

  12. Statistical Learning of Two Artificial Languages Presented Successively: How Conscious?

    PubMed Central

    Franco, Ana; Cleeremans, Axel; Destrebecqz, Arnaud

    2011-01-01

    Statistical learning is assumed to occur automatically and implicitly, but little is known about the extent to which the representations acquired over training are available to conscious awareness. In this study, we focus on whether the knowledge acquired in a statistical learning situation is available to conscious control. Participants were first exposed to an artificial language presented auditorily. Immediately thereafter, they were exposed to a second artificial language. Both languages were composed of the same corpus of syllables and differed only in the transitional probabilities. We first determined that both languages were equally learnable (Experiment 1) and that participants could learn the two languages and differentiate between them (Experiment 2). Then, in Experiment 3, we used an adaptation of the Process-Dissociation Procedure (Jacoby, 1991) to explore whether participants could consciously manipulate the acquired knowledge. Results suggest that statistical information can be used to parse and differentiate between two different artificial languages, and that the resulting representations are available to conscious control. PMID:21960981

  13. The Effect of the Number of Syllables on Handwriting Production

    ERIC Educational Resources Information Center

    Lambert, Eric; Kandel, Sonia; Fayol, Michel; Esperet, Eric

    2008-01-01

    Four experiments examined whether motor programming in handwriting production can be modulated by the syllable structure of the word to be written. This study manipulated the number of syllables. The items, words and pseudo-words, had 2, 3 or 4 syllables. French adults copied them three times. We measured the latencies between the visual…

  14. Monitoring Syllable Boundaries during Speech Production

    ERIC Educational Resources Information Center

    Jansma, Bernadette M.; Schiller, Niels O.

    2004-01-01

    This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a…

  15. Syllable Structure in Arabic Varieties with a Focus on Superheavy Syllables

    ERIC Educational Resources Information Center

    Bamakhramah, Majdi A.

    2010-01-01

    This thesis has two broad goals. The first is to contribute to the study of Arabic phonology particularly syllable structure and syllabification. This will be achieved through examining phenomena related to syllable structure and syllabic weight such as syllabification, stress assignment, epenthesis, syncope, and sonority in three different…

  16. Measures of native and non-native rhythm in a quantity language.

    PubMed

    Stockmal, Verna; Markus, Dace; Bond, Dzintra

    2005-01-01

    The traditional phonetic classification of language rhythm as stress-timed or syllable-timed is attributed to Pike. Recently, two different proposals have been offered for describing the rhythmic structure of languages from acoustic-phonetic measurements. Ramus has suggested a metric based on the proportion of vocalic intervals and the variability (SD) of consonantal intervals. Grabe has proposed Pairwise Variability Indices (nPVI, rPVI) calculated from the differences in vocalic and consonantal durations between successive syllables. We have calculated both the Ramus and Grabe metrics for Latvian, traditionally considered a syllable rhythm language, and for Latvian as spoken by Russian learners. Native speakers and proficient learners were very similar whereas low-proficiency learners showed high variability on some properties. The metrics did not provide an unambiguous classification of Latvian.
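    A minimal sketch of the Ramus-style metrics named above (%V and ΔC), assuming utterances have already been segmented into labelled vocalic and consonantal intervals; the population standard deviation is used here, which may differ from a given study's convention:

```python
def ramus_metrics(intervals):
    """%V and deltaC from one segmented utterance.

    intervals: sequence of (label, duration) pairs with label 'V'
    (vocalic) or 'C' (consonantal). Returns the proportion of total
    duration that is vocalic (as a percentage) and the standard
    deviation of consonantal interval durations."""
    voc = [d for label, d in intervals if label == 'V']
    cons = [d for label, d in intervals if label == 'C']
    total = sum(voc) + sum(cons)
    percent_v = 100 * sum(voc) / total
    mean_c = sum(cons) / len(cons)
    delta_c = (sum((d - mean_c) ** 2 for d in cons) / len(cons)) ** 0.5
    return percent_v, delta_c
```

    On the Ramus proposal, syllable-timed languages tend toward higher %V and lower ΔC than stress-timed languages, so a language's position in the (%V, ΔC) plane serves as its rhythm-class estimate.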

  17. Free Reading: A Powerful Tool for Acquiring a Second Language

    ERIC Educational Resources Information Center

    Priya, J.; Ponniah, R. Joseph

    2013-01-01

    The paper claims that free reading is a crucial ingredient in acquiring a second or foreign language. It contributes to the development of all measures of language competence which include grammar, vocabulary, spelling, syntax, fluency and style. The review supports the claim that readers acquire language subconsciously when they receive…

  18. Contrasting the effects of duration and number of syllables on the perceptual normalization of lexical tones

    NASA Astrophysics Data System (ADS)

    Ciocca, Valter; Francis, Alexander L.; Yau, Teresa S.-K.

    2004-05-01

    In tonal languages, syllabic fundamental frequency (F0) patterns ("lexical tones") convey lexical meaning. Listeners need to relate such pitch patterns to the pitch range of a speaker ("tone normalization") to accurately identify lexical tones. This study investigated the amount of tonal information required to perform tone normalization. A target CV syllable, perceived as either a high level, a low level, or a mid level Cantonese tone, was preceded by a four-syllable carrier sentence whose F0 was shifted (1 semitone), or not shifted. Four conditions were obtained by gating one, two, three, or four syllables from the onset of the target. Presentation rate (normal versus fast) was set such that the duration of the one, two, and three syllable conditions (normal carrier) was equal to that of the two, three, and four syllable conditions (fast carrier). Results suggest that tone normalization is largely accomplished within 250 ms or so prior to target onset, independent of the number of syllables; additional tonal information produces a relatively small increase in tone normalization. Implications for models of lexical tone normalization will be discussed. [Work supported by the RGC of the Hong Kong SAR, Project No. HKU 7193/00H.]
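    For reference, the one-semitone F0 shift used in the carrier manipulation corresponds to a fixed ratio on a logarithmic frequency scale: a shift of n semitones multiplies F0 by 2^(n/12). An illustrative helper (not from the study):

```python
def shift_semitones(f0_hz, n):
    """Shift a fundamental frequency by n semitones (n may be
    negative). Twelve semitones double the frequency (one octave)."""
    return f0_hz * 2 ** (n / 12)
```

    A 1-semitone shift thus raises F0 by about 5.9% (2^(1/12) ≈ 1.0595), a change well within a plausible between-speaker pitch-range difference.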

  19. Orthographic vs. Phonologic Syllables in Handwriting Production

    ERIC Educational Resources Information Center

    Kandel, Sonia; Herault, Lucie; Grosjacques, Geraldine; Lambert, Eric; Fayol, Michel

    2009-01-01

    French children program the words they write syllable by syllable. We examined whether the syllable the children use to segment words is determined phonologically (i.e., is derived from speech production processes) or orthographically. Third, 4th and 5th graders wrote on a digitiser words that were mono-syllables phonologically (e.g.…

  20. The right ear advantage revisited: speech lateralisation in dichotic listening using consonant-vowel and vowel-consonant syllables.

    PubMed

    Sætrevik, Bjørn

    2012-01-01

    The dichotic listening task is typically administered by presenting a consonant-vowel (CV) syllable to each ear and asking the participant to report the syllable heard most clearly. The results tend to show more reports of the right ear syllable than of the left ear syllable, an effect called the right ear advantage (REA). The REA is assumed to be due to the crossing over of auditory fibres and the processing of language stimuli being lateralised to left temporal areas. However, the tendency for most dichotic listening experiments to use only CV syllable stimuli limits the extent to which the conclusions can be generalised to also apply to other speech phonemes. The current study re-examines the REA in dichotic listening by using both CV and vowel-consonant (VC) syllables and combinations thereof. Results showed a replication of the REA response pattern for both CV and VC syllables, thus indicating that the general assumption of left-side localisation of processing can be applied for both types of stimuli. Further, on trials where a CV is presented in one ear and a VC is presented in the other ear, the CV is selected more often than the VC, indicating that these phonemes have an acoustic or processing advantage.

  1. The relative roles of cultural drift and acoustic adaptation in shaping syllable repertoires of island bird populations change with time since colonization.

    PubMed

    Potvin, Dominique A; Clegg, Sonya M

    2015-02-01

    In birds, song divergence often precedes and facilitates divergence of other traits. We assessed the relative roles of cultural drift, innovation, and acoustic adaptation in divergence of island bird dialects, using silvereyes (Zosterops lateralis). In recently colonized populations, syllable diversity was not significantly lower than source populations, shared syllables between populations decreased with increasing number of founder events, and dialect variation displayed contributions from both habitat features and drift. The breadth of multivariate space occupied by recently colonized Z. l. lateralis populations was comparable to evolutionarily old forms that have diverged over thousands to hundreds of thousands of years. In evolutionarily old subspecies, syllable diversity was comparable to the mainland and the amount of variation in syllable composition explained by habitat features increased by two- to threefold compared to recently colonized populations. Together these results suggest that cultural drift influences syllable repertoires in recently colonized populations, but innovation likely counters syllable loss from colonization. In evolutionarily older populations, the influence of acoustic adaptation increases, possibly favoring a high diversity of syllables. These results suggest that the relative importance of cultural drift and acoustic adaptation changes with time since colonization in island bird populations, highlighting the value of considering multiple mechanisms and timescale of divergence when investigating island song divergence.

  2. Impaired Perception of Syllable Stress in Children with Dyslexia: A Longitudinal Study

    ERIC Educational Resources Information Center

    Goswami, Usha; Mead, Natasha; Fosker, Tim; Huss, Martina; Barnes, Lisa; Leong, Victoria

    2013-01-01

    Prosodic patterning is a key structural element of spoken language. However, the potential role of prosodic awareness in the phonological difficulties that characterise children with developmental dyslexia has been little studied. Here we report the first longitudinal study of sensitivity to syllable stress in children with dyslexia, enabling the…

  3. Topography of Syllable Change-Detection Electrophysiological Indices in Children and Adults with Reading Disabilities

    ERIC Educational Resources Information Center

    Hommet, Caroline; Vidal, Julie; Roux, Sylvie; Blanc, Romuald; Barthez, Marie Anne; De Becque, Brigitte; Barthelemy, Catherine; Bruneau, Nicole; Gomot, Marie

    2009-01-01

    Introduction: Developmental dyslexia (DD) is a frequent language-based learning disorder. The predominant etiological view postulates that reading problems originate from a phonological impairment. Method: We studied mismatch negativity (MMN) and Late Discriminative Negativity (LDN) in response to syllable change in both children (n = 12; 8-12 years) and…

  4. Paired variability indices in assessing speech rhythm in Spanish/English bilingual language acquisition

    NASA Astrophysics Data System (ADS)

    Work, Richard; Andruski, Jean; Casielles, Eugenia; Kim, Sahyang; Nathan, Geoff

    2005-04-01

    Traditionally, English is classified as a stress-timed language while Spanish is classified as syllable-timed. Examining the contrasting development of rhythmic patterns in bilingual first language acquisition should provide information on how this differentiation takes place. As part of a longitudinal study, speech samples were taken of a Spanish/English bilingual child of Argentinean parents living in the Midwestern United States between the ages of 1;8 and 3;2. Spanish is spoken at home and English input comes primarily from an English day care the child attends 5 days a week. The parents act as interlocutors for Spanish recordings with a native speaker interacting with the child for the English recordings. Following the work of Grabe, Post and Watson (1999) and Grabe and Low (2002) a normalized Pairwise Variability Index (PVI) is used which compares, in utterances of minimally four syllables, the durations of vocalic intervals in successive syllables. Comparisons are then made between the rhythmic patterns of the child's productions within each language over time and between languages at comparable MLUs. Comparisons are also made with the rhythmic patterns of the adult productions of each language. Results will be analyzed for signs of native speaker-like rhythmic production in the child.

  5. Telerehabilitation, virtual therapists, and acquired neurologic speech and language disorders.

    PubMed

    Cherney, Leora R; van Vuuren, Sarel

    2012-08-01

    Telerehabilitation (telerehab) offers cost-effective services that potentially can improve access to care for those with acquired neurologic communication disorders. However, regulatory issues including licensure, reimbursement, and threats to privacy and confidentiality hinder the routine implementation of telerehab services into the clinical setting. Despite these barriers, rapid technological advances and a growing body of research regarding the use of telerehab applications support its use. This article reviews the evidence related to acquired neurologic speech and language disorders in adults, focusing on studies that have been published since 2000. Research studies have used telerehab systems to assess and treat disorders including dysarthria, apraxia of speech, aphasia, and mild Alzheimer disease. They show that telerehab is a valid and reliable vehicle for delivering speech and language services. The studies represent a progression of technological advances in computing, Internet, and mobile technologies. They range on a continuum from working synchronously (in real-time) with a speech-language pathologist to working asynchronously (offline) with a stand-in virtual therapist. One such system that uses a virtual therapist for the treatment of aphasia, the Web-ORLA™ (Rehabilitation Institute of Chicago, Chicago, IL) system, is described in detail. Future directions for the advancement of telerehab for clinical practice are discussed.

  6. Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities

    ERIC Educational Resources Information Center

    Norrix, Linda W.; Plante, Elena; Vance, Rebecca

    2006-01-01

    Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…

  7. A Program That Acquires Language Using Positive and Negative Feedback.

    ERIC Educational Resources Information Center

    Brand, James

    1987-01-01

    Describes the language learning program "Acquire," which is a sample of grammar induction. It is a learning algorithm based on a pattern-matching scheme, using both a positive and negative network to reduce overgeneration. Language learning programs may be useful as tutorials for learning the syntax of a foreign language. (Author/LMO)

  8. Reading faces: investigating the use of a novel face-based orthography in acquired alexia.

    PubMed

    Moore, Michelle W; Brendel, Paul C; Fiez, Julie A

    2014-02-01

    Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic "FaceFont" orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a "linguistic bridge" into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system.

  10. ERP measures of syllable processing in 1 year olds: infant diet- and gender-related differences

    USDA-ARS?s Scientific Manuscript database

    Language skills are generally better in females than males, but the basis for these differences has not been determined. To investigate whether variations in infant diet contribute to these differences, cortical responses to the syllable /pa/ (ERPs;124 sites) were examined in healthy 12-month-old, f...

  11. Learning of Syllable-Object Relations by Preverbal Infants: The Role of Temporal Synchrony and Syllable Distinctiveness

    ERIC Educational Resources Information Center

    Gogate, Lakshmi J.

    2010-01-01

    The role of temporal synchrony and syllable distinctiveness in preverbal infants' learning of word-object relations was investigated. In Experiment 1, 7- and 8-month-olds (N=64) were habituated under conditions where two "similar-sounding" syllables, /tah/ and /gah/, were spoken simultaneously with the motions of one of two sets of…

  12. Electrophysiological and hemodynamic mismatch responses in rats listening to human speech syllables.

    PubMed

    Mahmoudzadeh, Mahdi; Dehaene-Lambertz, Ghislaine; Wallois, Fabrice

    2017-01-01

    Speech is a complex auditory stimulus which is processed according to several time-scales. Whereas consonant discrimination is required to resolve rapid acoustic events, voice perception relies on slower cues. Humans, right from preterm ages, are particularly efficient at encoding temporal cues. To compare the capacities of preterm infants to those observed in other mammals, we tested anesthetized adult rats by using exactly the same paradigm as that used in preterm neonates. We simultaneously recorded neural (using ECoG) and hemodynamic responses (using fNIRS) to series of human speech syllables and investigated the brain response to a change of consonant (ba vs. ga) and to a change of voice (male vs. female). Both methods revealed concordant results, although ECoG measures were more sensitive than fNIRS. Responses to syllables were bilateral, but with marked right-hemispheric lateralization. Responses to voice changes were observed with both methods, while only ECoG was sensitive to consonant changes. These results suggest that rats more effectively processed the speech envelope than fine temporal cues, in contrast with human preterm neonates, in whom the opposite effects were observed. Cross-species comparisons constitute a very valuable tool to define the singularities of the human brain and the species-specific biases that may help human infants learn their native language.

  13. Role of the motor system in language knowledge.

    PubMed

    Berent, Iris; Brem, Anna-Katharine; Zhao, Xu; Seligson, Erica; Pan, Hong; Epstein, Jane; Stern, Emily; Galaburda, Albert M; Pascual-Leone, Alvaro

    2015-02-17

    All spoken languages express words by sound patterns, and certain patterns (e.g., blog) are systematically preferred to others (e.g., lbog). What principles account for such preferences: does the language system encode abstract rules banning syllables like lbog, or does their dislike reflect the increased motor demands associated with speech production? More generally, we ask whether linguistic knowledge is fully embodied or whether some linguistic principles could potentially be abstract. To address this question, here we gauge the sensitivity of English speakers to the putative universal syllable hierarchy (e.g., blif ≻ bnif ≻ bdif ≻ lbif) while undergoing transcranial magnetic stimulation (TMS) over the cortical motor representation of the left orbicularis oris muscle. If syllable preferences reflect motor simulation, then worse-formed syllables (e.g., lbif) should (i) elicit more errors; (ii) engage more strongly motor brain areas; and (iii) elicit stronger effects of TMS on these motor regions. In line with the motor account, we found that repetitive TMS pulses impaired participants' global sensitivity to the number of syllables, and functional MRI confirmed that the cortical stimulation site was sensitive to the syllable hierarchy. Contrary to the motor account, however, ill-formed syllables were least likely to engage the lip sensorimotor area and they were least impaired by TMS. Results suggest that speech perception automatically triggers motor action, but this effect is not causally linked to the computation of linguistic structure. We conclude that the language and motor systems are intimately linked, yet distinct. Language is designed to optimize motor action, but its knowledge includes principles that are disembodied and potentially abstract.

  15. Brain correlates of stuttering and syllable production. A PET performance-correlation analysis.

    PubMed

    Fox, P T; Ingham, R J; Ingham, J C; Zamarripa, F; Xiong, J H; Lancaster, J L

    2000-10-01

    To distinguish the neural systems of normal speech from those of stuttering, PET images of brain blood flow were probed (correlated voxel-wise) with per-trial speech-behaviour scores obtained during PET imaging. Two cohorts were studied: 10 right-handed men who stuttered and 10 right-handed, age- and sex-matched non-stuttering controls. Ninety PET blood flow images were obtained in each cohort (nine per subject as three trials of each of three conditions) from which r-value statistical parametric images (SPI{r}) were computed. Brain correlates of stutter rate and syllable rate showed striking differences in both laterality and sign (i.e. positive or negative correlations). Stutter-rate correlates, both positive and negative, were strongly lateralized to the right cerebral and left cerebellar hemispheres. Syllable correlates in both cohorts were bilateral, with a bias towards the left cerebral and right cerebellar hemispheres, in keeping with the left-cerebral dominance for language and motor skills typical of right-handed subjects. For both stutters and syllables, the brain regions that were correlated positively were those of speech production: the mouth representation in the primary motor cortex; the supplementary motor area; the inferior lateral premotor cortex (Broca's area); the anterior insula; and the cerebellum. The principal difference between syllable-rate and stutter-rate positive correlates was hemispheric laterality. A notable exception to this rule was that cerebellar positive correlates for syllable rate were far more extensive in the stuttering cohort than in the control cohort, which suggests a specific role for the cerebellum in enabling fluent utterances in persons who stutter. Stutters were negatively correlated with right-cerebral regions (superior and middle temporal gyrus) associated with auditory perception and processing, regions which were positively correlated with syllables in both the stuttering and control cohorts. These findings

  16. Lingual Kinematics during Rapid Syllable Repetition in Parkinson's Disease

    ERIC Educational Resources Information Center

    Wong, Min Ney; Murdoch, Bruce E.; Whelan, Brooke-Mai

    2012-01-01

    Background: Rapid syllable repetition tasks are commonly used in the assessment of motor speech disorders. However, little is known about the articulatory kinematics during rapid syllable repetition in individuals with Parkinson's disease (PD). Aims: To investigate and compare lingual kinematics during rapid syllable repetition in dysarthric…

  17. Planning and Articulation in Incremental Word Production: Syllable-Frequency Effects in English

    ERIC Educational Resources Information Center

    Cholin, Joana; Dell, Gary S.; Levelt, Willem J. M.

    2011-01-01

    We investigated the role of syllables during speech planning in English by measuring syllable-frequency effects. So far, syllable-frequency effects in English have not been reported. English has poorly defined syllable boundaries, and thus the syllable might not function as a prominent unit in English speech production. Speakers produced either…

  18. Effects of prosody and position on the timing of deictic gestures.

    PubMed

    Rusiewicz, Heather Leavy; Shaiman, Susan; Iverson, Jana M; Szuminsky, Neil

    2013-04-01

    In this study, the authors investigated the hypothesis that the perceived tight temporal synchrony of speech and gesture is evidence of an integrated spoken language and manual gesture communication system. It was hypothesized that experimental manipulations of the spoken response would affect the timing of deictic gestures. The authors manipulated syllable position and contrastive stress in compound words in multiword utterances by using a repeated-measures design to investigate the degree of synchronization of speech and pointing gestures produced by 15 American English speakers. Acoustic measures were compared with the gesture movement recorded via capacitance. Although most participants began a gesture before the target word, the temporal parameters of the gesture changed as a function of syllable position and prosody. Syllables with contrastive stress in the 2nd position of compound words were the longest in duration and also most consistently affected the timing of gestures, as measured by several dependent measures. Increasing the stress of a syllable significantly affected the timing of a corresponding gesture, notably for syllables in the 2nd position of words that would not typically be stressed. The findings highlight the need to consider the interaction of gestures and spoken language production from a motor-based perspective of coordination.

  19. Beyond Single Syllables: The Effect of First Syllable Frequency and Orthographic Similarity on Eye Movements during Silent Reading

    ERIC Educational Resources Information Center

    Hawelka, Stefan; Schuster, Sarah; Gagl, Benjamin; Hutzler, Florian

    2013-01-01

    The study assessed the eye movements of 60 adult German readers during silent reading of target words, consisting of two and three syllables, embedded in sentences. The first objective was to assess whether the inhibitory effect of first syllable frequency, which was up to now primarily shown for isolated words, generalises to natural reading. The…

  20. Learning metathesis: Evidence for syllable structure constraints.

    PubMed

    Finley, Sara

    2017-02-01

    One of the major questions in the cognitive science of language is whether the perceptual and phonological motivations for the rules and patterns that govern the sounds of language are a part of the psychological reality of grammatical representations. This question is particularly important in the study of phonological patterns - systematic constraints on the representation of sounds, because phonological patterns tend to be grounded in phonetic constraints. This paper focuses on phonological metathesis, which occurs when two adjacent sounds switch positions (e.g., cast pronounced as cats). While many cases of phonological metathesis appear to be motivated by constraints on syllable structure, it is possible that these metathesis patterns are merely artifacts of historical change, and do not represent the linguistic knowledge of the speaker (Blevins & Garrett, 1998). Participants who were exposed to a metathesis pattern that can be explained in terms of structural or perceptual improvement were less likely to generalize to metathesis patterns that did not show the same improvements. These results support a substantively biased theory in which phonological patterns are encoded in terms of structurally motivated constraints.

  1. Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages.

    PubMed

    Jadoul, Yannick; Ravignani, Andrea; Thompson, Bill; Filippi, Piera; de Boer, Bart

    2016-01-01

    Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find, across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal that could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure (regularities arising in an ordered series of syllable timings), testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.
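
    The "first-order regularities among adjacent units" that the abstract contrasts with higher-order structure can be illustrated with the normalised Pairwise Variability Index (nPVI), a standard adjacent-interval rhythm metric; this is a generic sketch, not one of the paper's own measures:

```python
def npvi(durations):
    """Normalised Pairwise Variability Index over successive durations.

    0 for perfectly isochronous timing; larger values mean that adjacent
    intervals differ more. Purely first-order: only neighbouring pairs
    contribute, so longer-range temporal structure is invisible to it.
    """
    pairs = list(zip(durations, durations[1:]))
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

even = npvi([0.2, 0.2, 0.2, 0.2])         # isochronous syllables -> 0.0
alternating = npvi([0.1, 0.3, 0.1, 0.3])  # long/short alternation -> ~100
```

    An isochronous sequence scores 0; the more adjacent durations differ, the higher the index, regardless of any non-adjacent structure in the sequence.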

  2. Comparison of spectrographic records of two syllables pronounced from scripts in hiragana and romaji by students with different familiarity with English.

    PubMed

    Ototake, Harumi; Yamada, Jun

    2005-10-01

    The same syllables /mu/ and /ra/, written in Japanese hiragana and romaji and presented in a standard speeded naming task, elicited phonetically or acoustically different responses in a syllabic hiragana condition and a romaji condition. The participants were two groups of Japanese college students (ns = 15 and 16) with different familiarity with English as a second language. The results suggested that the phonetic reality of syllables represented in these scripts can differ, depending on the interaction between the kind of script and speakers' orthographic familiarity.

  3. Oral and Hand Movement Speeds are Associated with Expressive Language Ability in Children with Speech Sound Disorder

    PubMed Central

    Peter, Beate

    2013-01-01

    This study tested the hypothesis that children with speech sound disorder have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation), in 11 children with moderate to severe SSD and 11 controls. Syllable durations from a syllable repetition task served as an estimate of maximal oral movement speed. In two imitation tasks, nonwords and clapped rhythms, unstressed vowel durations and quarter-note clap intervals served as estimates of oral and hand movement speed, respectively. Syllable durations were significantly correlated with vowel durations and hand clap intervals. Sentence imitation was correlated with all three timed movement measures. Clustering on syllable repetition durations produced three clusters that also differed in sentence imitation scores. Results are consistent with limited movement speeds across motor systems and SSD subtypes defined by motor speeds as a corollary of expressive language abilities. PMID:22411590

  4. Oral and hand movement speeds are associated with expressive language ability in children with speech sound disorder.

    PubMed

    Peter, Beate

    2012-12-01

    This study tested the hypothesis that children with speech sound disorder have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation), in 11 children with moderate to severe SSD and 11 controls. Syllable durations from a syllable repetition task served as an estimate of maximal oral movement speed. In two imitation tasks, nonwords and clapped rhythms, unstressed vowel durations and quarter-note clap intervals served as estimates of oral and hand movement speed, respectively. Syllable durations were significantly correlated with vowel durations and hand clap intervals. Sentence imitation was correlated with all three timed movement measures. Clustering on syllable repetition durations produced three clusters that also differed in sentence imitation scores. Results are consistent with limited movement speeds across motor systems and SSD subtypes defined by motor speeds as a corollary of expressive language abilities.
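
    The clustering step reported above (grouping children by their syllable-repetition durations) can be sketched with a minimal one-dimensional k-means; the procedure and the toy durations here are illustrative assumptions, not the study's actual analysis:

```python
import numpy as np

def kmeans_1d(x, k, iters=20):
    """Minimal 1-D k-means with quantile-based initial centers."""
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        # Assign each value to its nearest center, then update centers
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):   # guard against empty clusters
                centers[j] = x[labels == j].mean()
    return labels, centers

# Toy mean syllable-repetition durations (s) for 7 hypothetical children
durs = np.array([0.18, 0.20, 0.22, 0.30, 0.32, 0.45, 0.48])
labels, centers = kmeans_1d(durs, k=3)   # three duration clusters
```

    With well-separated durations like these, the three clusters correspond to fast, intermediate, and slow repetition groups.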

  5. Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages

    PubMed Central

    Jadoul, Yannick; Ravignani, Andrea; Thompson, Bill; Filippi, Piera; de Boer, Bart

    2016-01-01

    Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure—regularities arising in an ordered series of syllable timings—testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint. PMID:27994544

  6. Reading of kana (phonetic symbols for syllables) in Japanese children with spastic diplegia and periventricular leukomalacia.

    PubMed

    Yokochi, K

    2000-01-01

    In 31 Japanese children with spastic diplegia and periventricular leukomalacia (PVL), the age at which they could read Hiragana (phonetic symbols for syllables) and psychometric data were examined. Reading of Hiragana was achieved between 2 and 8 years of age in all subjects except one. Four children could read Hiragana at 2 to 3 years of age, an age which is considered early among Japanese children. Performance IQs of the Wechsler Scale were lower than Verbal IQs in 18 of 19 children who were administered this test, and DQs of the cognitive adaptive (C-A) area of the K-form developmental test (a popular test in Japan) were lower than those of the language social area in all 12 children taking this test. Among eight children having performance IQs or DQs of C-A less than 50, seven acquired reading ability of Hiragana at 8 years of age or below. A visuoperceptual disorder manifested by diplegic children with PVL does not affect the acquisition of Kana-reading ability.

  7. Learning metathesis: Evidence for syllable structure constraints

    PubMed Central

    Finley, Sara

    2016-01-01

    One of the major questions in the cognitive science of language is whether the perceptual and phonological motivations for the rules and patterns that govern the sounds of language are a part of the psychological reality of grammatical representations. This question is particularly important in the study of phonological patterns – systematic constraints on the representation of sounds, because phonological patterns tend to be grounded in phonetic constraints. This paper focuses on phonological metathesis, which occurs when two adjacent sounds switch positions (e.g., cast pronounced as cats). While many cases of phonological metathesis appear to be motivated by constraints on syllable structure, it is possible that these metathesis patterns are merely artifacts of historical change, and do not represent the linguistic knowledge of the speaker (Blevins & Garrett, 1998). Participants who were exposed to a metathesis pattern that can be explained in terms of structural or perceptual improvement were less likely to generalize to metathesis patterns that did not show the same improvements. These results support a substantively biased theory in which phonological patterns are encoded in terms of structurally motivated constraints. PMID:28082764

  8. Factors Influencing Sensitivity to Lexical Tone in an Artificial Language: Implications for Second Language Learning

    ERIC Educational Resources Information Center

    Caldwell-Harris, Catherine L.; Lancaster, Alia; Ladd, D. Robert; Dediu, Dan; Christiansen, Morten H.

    2015-01-01

    This study examined whether musical training, ethnicity, and experience with a natural tone language influenced sensitivity to tone while listening to an artificial tone language. The language was designed with three tones, modeled after level-tone African languages. Participants listened to a 15-min random concatenation of six 3-syllable words.…

  9. Syllable Durations of Preword and Early Word Vocalizations.

    ERIC Educational Resources Information Center

    Robb, Michael P.; Saxman, John H.

    1990-01-01

    The continuity in development of syllable duration patterns was examined in seven young children as they progressed from preword to multiword periods of vocalization development. Results revealed no systematic increase or decrease in the duration of bisyllables produced by the children as a group, whereas lengthening of final syllables was…

  10. Influence of syllable structure on L2 auditory word learning.

    PubMed

    Hamada, Megumi; Goya, Hideki

    2015-04-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a closed-syllable structure and consonant clusters. Two groups of college students (Japanese group, N = 22; and native speakers of English, N = 21) learned paired English pseudowords and pictures. The pseudoword types differed in terms of the syllable structure and consonant clusters (congruent vs. incongruent) and the position of consonant clusters (coda vs. onset). Recall accuracy was higher for the pseudowords in the congruent type and the pseudowords with the coda-consonant clusters. The syllable structure effect was obtained from both participant groups, disconfirming the hypothesized cross-linguistic influence on L2 auditory word learning.

  11. On the Locus of the Syllable Frequency Effect in Speech Production

    ERIC Educational Resources Information Center

    Laganaro, Marina; Alario, F. -Xavier

    2006-01-01

    The observation of a syllable frequency effect in naming latencies has been an argument in favor of a functional role of stored syllables in speech production. Accordingly, various theoretical models postulate that a repository of syllable representations is accessed during phonetic encoding. However, the direct empirical evidence for locating the…

  12. Proximate Units in Word Production: Phonological Encoding Begins with Syllables in Mandarin Chinese but with Segments in English

    ERIC Educational Resources Information Center

    O'Seaghdha, Padraig G.; Chen, Jenn-Yeu; Chen, Train-Min

    2010-01-01

    In Mandarin Chinese, speakers benefit from fore-knowledge of what the first syllable but not of what the first phonemic segment of a disyllabic word will be (Chen, Chen, & Dell, 2002), contrasting with findings in English, Dutch, and other Indo-European languages, and challenging the generality of current theories of word production. In this…

  13. Syllable-related breathing in infants in the second year of life.

    PubMed

    Parham, Douglas F; Buder, Eugene H; Oller, D Kimbrough; Boliek, Carol A

    2011-08-01

    This study explored whether breathing behaviors of infants within the 2nd year of life differ between tidal breathing and breathing supporting single unarticulated syllables and canonical/articulated syllables. Vocalizations and breathing kinematics of 9 infants between 53 and 90 weeks of age were recorded. A strict selection protocol was used to identify analyzable breath cycles. Syllables were categorized on the basis of consensus coding. Inspiratory and expiratory durations, excursions, and slopes were calculated for the 3 breath cycle types and were normalized using mean tidal breath measures. Tidal breathing cycles were significantly different from syllable-related cycles on all breathing measures. There were no significant differences between unarticulated syllable cycles and canonical syllable cycles, even after controlling for utterance duration and sound pressure level. Infants in the 2nd year of life exhibit clear differences between tidal breathing and speech-related breathing, but categorically distinct breath support for syllable types with varying articulatory demands was not evident in the present findings. Speech development introduces increasingly complex utterances, so older infants may produce detectable articulation-related adaptations of breathing kinematics. For younger infants, breath support may vary systematically among utterance types, due more to phonatory variations than to articulatory demands.

  14. Syllable-Related Breathing in Infants in the Second Year of Life

    PubMed Central

    Parham, Douglas F.; Buder, Eugene H.; Oller, D. Kimbrough; Boliek, Carol A.

    2010-01-01

    Purpose This study explored whether breathing behaviors of infants within the second year of life differ between tidal breathing and breathing supporting single unarticulated syllables and canonical/articulated syllables. Method Vocalizations and breathing kinematics of nine infants between 53 and 90 weeks of age were recorded. A strict selection protocol was used to identify analyzable breath cycles. Syllables were categorized based on consensus coding. Inspiratory and expiratory durations, excursions, and slopes were calculated for the three breath cycle types and normalized using mean tidal breath measures. Results Tidal breathing cycles were significantly different from syllable-related cycles on all breathing measures. There were no significant differences between unarticulated syllable cycles and canonical syllable cycles, even after controlling for utterance duration and sound pressure level. Conclusions Infants in the second year of life exhibit clear differences between tidal breathing and speech-related breathing, but categorically distinct breath support for syllable types with varying articulatory demands was not evident in the current findings. Speech development introduces increasingly complex utterances, so older infants may produce detectable articulation-related adaptations of breathing kinematics. For younger infants, breath support may vary systematically among utterance types, due more to phonatory variations than to articulatory demands. PMID:21173390

  15. Acquiring and processing verb argument structure: distributional learning in a miniature language.

    PubMed

    Wonnacott, Elizabeth; Newport, Elissa L; Tanenhaus, Michael K

    2008-05-01

    Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some researchers claiming a crucial role for verb semantics in the determination of when generalization may and may not occur. Similarly, there has been debate regarding how verb-specific and more generalized constraints interact in sentence processing and on the role of semantics in this process. The current work explores these issues using artificial language learning. In three experiments using languages without semantic cues to verb distribution, we demonstrate that learners can acquire both verb-specific and verb-general patterns, based on distributional information in the linguistic input regarding each of the verbs as well as across the language as a whole. As with natural languages, these factors are shown to affect production, judgments and real-time processing. We demonstrate that learners apply a rational procedure in determining their usage of these different input statistics and conclude by suggesting that a Bayesian perspective on statistical learning may be an appropriate framework for capturing our findings.

  16. Flexibility of orthographic and graphomotor coordination during a handwritten copy task: effect of time pressure

    PubMed Central

    Sausset, Solen; Lambert, Eric; Olive, Thierry

    2013-01-01

    The coordination of the various processes involved in language production is a subject of keen debate in writing research. Some authors hold that writing processes can be flexibly coordinated according to task demands, whereas others claim that process coordination is entirely inflexible. For instance, orthographic planning has been shown to be resource-dependent during handwriting, but inflexible in typing, even under time pressure. The present study therefore went one step further in studying flexibility in the coordination of orthographic processing and graphomotor execution, by measuring the impact of time pressure during a handwritten copy task. Orthographic and graphomotor processes were observed via syllable processing. Writers copied out two- and three-syllable words three times in a row, with and without time pressure. Latencies and letter measures at syllable boundaries were analyzed. We hypothesized that if coordination is flexible and varies according to task demands, it should be modified by time pressure, affecting both latency before execution and duration of execution. We therefore predicted that the extent of syllable processing before execution would be reduced under time pressure and, as a consequence, syllable effects during execution would be more salient. Results showed, however, that time pressure interacted neither with syllable number nor with syllable structure. Accordingly, syllable processing appears to remain the same regardless of time pressure. The flexibility of process coordination during handwriting is discussed, as is the operationalization of time pressure constraints. PMID:24319435

  17. Frames of Reference in Spatial Memories Acquired From Language

    ERIC Educational Resources Information Center

    Mou, Weimin; Zhang, Kan; McNamara, Timothy P.

    2004-01-01

    Four experiments examined reference systems in spatial memories acquired from language. Participants read narratives that located 4 objects in canonical (front, back, left, right) or noncanonical (left front, right front, left back, right back) positions around them. Participants' focus of attention was first set on each of the 4 objects, and then…

  18. Language at Three Timescales: The Role of Real-Time Processes in Language Development and Evolution.

    PubMed

    McMurray, Bob

    2016-04-01

    Evolutionary developmental systems (evo-devo) theory stresses that selection pressures operate on entire developmental systems rather than just genes. This study extends this approach to language evolution, arguing that selection pressure may operate on two quasi-independent timescales. First, children clearly must acquire language successfully (as acknowledged in traditional evo-devo accounts) and evolution must equip them with the tools to do so. Second, while this is developing, they must also communicate with others in the moment using partially developed knowledge. These pressures may require different solutions, and their combination may underlie the evolution of complex mechanisms for language development and processing. I present two case studies to illustrate how the demands of both real-time communication and language acquisition may be subtly different (and interact). The first case study examines infant-directed speech (IDS). A recent view is that IDS underwent cultural evolution to adapt to the statistical learning mechanisms that infants use to acquire the speech categories of their language. However, recent data suggest it may not have evolved to enhance development, but rather to serve a more real-time communicative function. The second case study examines the argument for seemingly specialized mechanisms for learning word meanings (e.g., fast-mapping). Both behavioral and computational work suggest that learning may be much slower and served by general-purpose mechanisms like associative learning. Fast-mapping, then, may be a real-time process meant to serve immediate communication, not learning, by augmenting incomplete vocabulary knowledge with constraints from the current context. Together, these studies suggest that evolutionary accounts should consider selection pressure arising from both real-time communicative demands and from the need for accurate language development. Copyright © 2016 Cognitive Science Society, Inc.

  19. Masked syllable priming effects in word and picture naming in Chinese.

    PubMed

    You, Wenping; Zhang, Qingfang; Verdonschot, Rinus G

    2012-01-01

    Four experiments investigated the role of the syllable in Chinese spoken word production. Chen, Chen and Ferrand (2003) reported a syllable priming effect when primes and targets shared the first syllable using a masked priming paradigm in Chinese. Our Experiment 1 was a direct replication of Chen et al.'s (2003) Experiment 3 employing CV (e.g., /ba2.ying2/, strike camp) and CVG (e.g., /bai2.shou3/, white haired) syllable types. Experiment 2 tested the syllable priming effect using different syllable types: e.g., CV (/qi4.qiu2/, balloon) and CVN (/qing1.ting2/, dragonfly). Experiment 3 investigated this issue further using line drawings of common objects as targets that were preceded either by a CV (e.g., /qi3/, attempt), or a CVN (e.g., /qing2/, affection) prime. Experiment 4 further examined the priming effect by a comparison between CV or CVN priming and an unrelated priming condition using CV-NX (e.g., /mi2.ni3/, mini) and CVN-CX (e.g., /min2.ju1/, dwellings) as target words. These four experiments consistently found that CV targets were named faster when preceded by CV primes than when they were preceded by CVG, CVN or unrelated primes, whereas CVG or CVN targets showed the reverse pattern. These results indicate that the priming effect critically depends on the match between the structure of the prime and that of the first syllable of the target. The effect obtained in this study was consistent across different stimuli and different tasks (word and picture naming), and provides more conclusive and consistent data regarding the role of the syllable in Chinese speech production.

  20. Phonological memory in sign language relies on the visuomotor neural system outside the left hemisphere language network.

    PubMed

    Kanazawa, Yuji; Nakamura, Kimihiro; Ishii, Toru; Aso, Toshihiko; Yamazaki, Hiroshi; Omori, Koichi

    2017-01-01

    Sign language is an essential medium for everyday social interaction for deaf people and plays a critical role in verbal learning. In particular, language development in those people should heavily rely on the verbal short-term memory (STM) via sign language. Most previous studies compared neural activations during signed language processing in deaf signers and those during spoken language processing in hearing speakers. For sign language users, it thus remains unclear how visuospatial inputs are converted into the verbal STM operating in the left-hemisphere language network. Using functional magnetic resonance imaging, the present study investigated neural activation while bilinguals of spoken and signed language were engaged in a sequence memory span task. On each trial, participants viewed a nonsense syllable sequence presented either as written letters or as fingerspelling (4-7 syllables in length) and then held the syllable sequence for 12 s. Behavioral analysis revealed that participants relied on phonological memory while holding verbal information regardless of the type of input modality. At the neural level, this maintenance stage broadly activated the left-hemisphere language network, including the inferior frontal gyrus, supplementary motor area, superior temporal gyrus and inferior parietal lobule, for both letter and fingerspelling conditions. Interestingly, while most participants reported that they relied on phonological memory during maintenance, direct comparisons between letters and fingers revealed strikingly different patterns of neural activation during the same period. Namely, the effortful maintenance of fingerspelling inputs relative to letter inputs activated the left superior parietal lobule and dorsal premotor area, i.e., brain regions known to play a role in visuomotor analysis of hand/arm movements. These findings suggest that the dorsal visuomotor neural system subserves verbal learning via sign language by relaying gestural inputs to

  1. Pitch Perception in Tone Language-Speaking Adults With and Without Autism Spectrum Disorders

    PubMed Central

    Cheng, Stella T. T.; Lam, Gary Y. H.

    2017-01-01

    Enhanced low-level pitch perception has been universally reported in autism spectrum disorders (ASD). This study examined whether tone language speakers with ASD exhibit this advantage. The pitch perception skill of 20 Cantonese-speaking adults with ASD was compared with that of 20 neurotypical individuals. Participants discriminated pairs of real syllable, pseudo-syllable (syllables that do not conform to the phonotactic rules or are accidental gaps), and non-speech (syllables with attenuated high-frequency segmental content) stimuli contrasting pitch levels. The results revealed significantly higher discrimination ability in both groups for the non-speech stimuli than for the pseudo-syllables with one semitone difference. No significant group differences were noted. In contrast to previous findings, post hoc analysis found that enhanced pitch perception was observed in a subgroup of participants with ASD showing no history of delayed speech onset. The tone language experience may have modulated the pitch processing mechanism in the speakers in both ASD and non-ASD groups. PMID:28616150

  2. Analysis of consonant /s/ and syllables in Malay language using electropalatography

    NASA Astrophysics Data System (ADS)

    Zin, Syatirah Mat; Suhaimi, Fatanah M.; Noor, Siti Noor Fazliah Mohd; Ismail, Nurul Iffah; Zali, Nurulakma

    2016-12-01

    During articulation and speech, the tongue is in contact with the hard palate. This contact can be measured using electropalatography (EPG). EPG is widely used for speech therapy in subjects with cleft palate, congenital aglossia, and phonemic paraphasic disorder, and also for language analysis and therapy. This study aims to analyse the production of /s/ and syllables by Malay adult speakers using the Reading EPG palate, whose 62 electrodes are arranged to match tongue-palate contact. The vowels used in this study are /a/, /i/ and /u/. Data were analysed using Articulate Assistant 1.18 software. The palate is divided into four zones: alveolar, post-alveolar, palatal and velar. Three normal Malay-speaking adults aged 30 to 35 years (mean age 32 years) participated. In the production of /s/, the percentage contact for subject 1 was 16% on the left side and 32% on the right side; for subject 2, 39% on the left and 32% on the right; subject 3 had 13% contact on the left and 29% on the right. In the production of /as/, percentage contact was 19% on the left and 29% on the right for subject 1; 19% on the left and 16% on the right for subject 2; and 23% on the left and 35% on the right for subject 3. In the production of /is/, subject 1 showed 32% on both sides; subject 2 showed 3% on the left and 10% on the right; and subject 3 showed 19% on the left and 32% on the right. In the production of /us/, percentage contact on the left and right sides was 42% and 26% for subject 1; subject 2 showed symmetrical contact on both sides, whereas subject 3 had 16% on the left and 26% on the right. Results indicate that combining the consonant /s/ with different vowels produces different tongue-palate contact patterns.
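The left/right percentage-contact figures above can be computed directly from a single EPG frame. The sketch below is illustrative only: it assumes a Reading-style frame stored as rows of 0/1 electrode contacts with a midline left/right split; the exact electrode layout and any zone restriction (alveolar, post-alveolar, palatal, velar) are assumptions, not taken from the study.

```python
# Percentage tongue-palate contact on one half of an EPG frame.
# A frame is a list of rows of 0/1 electrode contacts; the Reading
# palate has 62 electrodes in total, split here down the midline.

def percent_contact(frame, side):
    """Return the percentage of electrodes activated on the
    'left' or 'right' half of the frame."""
    halves = []
    for row in frame:
        mid = len(row) // 2
        halves.extend(row[:mid] if side == "left" else row[mid:])
    return 100.0 * sum(halves) / len(halves)

# Toy 2x4 frame: contact only on the front-left and back-right cells.
frame = [[1, 1, 0, 0],
         [0, 0, 1, 1]]
```

To reproduce per-zone figures, one would first restrict the rows to the zone of interest (e.g., the alveolar rows) before averaging.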

  3. On the edge of language acquisition: inherent constraints on encoding multisyllabic sequences in the neonate brain.

    PubMed

    Ferry, Alissa L; Fló, Ana; Brusini, Perrine; Cattarossi, Luigi; Macagno, Francesco; Nespor, Marina; Mehler, Jacques

    2016-05-01

    To understand language, humans must encode information from rapid, sequential streams of syllables - tracking their order and organizing them into words, phrases, and sentences. We used Near-Infrared Spectroscopy (NIRS) to determine whether human neonates are born with the capacity to track the positions of syllables in multisyllabic sequences. After familiarization with a six-syllable sequence, the neonate brain responded to the change (as shown by an increase in oxy-hemoglobin) when the two edge syllables switched positions but not when two middle syllables switched positions (Experiment 1), indicating that they encoded the syllables at the edges of sequences better than those in the middle. Moreover, when a 25 ms pause was inserted between the middle syllables as a segmentation cue, neonates' brains were sensitive to the change (Experiment 2), indicating that subtle cues in speech can signal a boundary, with enhanced encoding of the syllables located at the edges of that boundary. These findings suggest that neonates' brains can encode information from multisyllabic sequences and that this encoding is constrained. Moreover, subtle segmentation cues in a sequence of syllables provide a mechanism with which to accurately encode positional information from longer sequences. Tracking the order of syllables is necessary to understand language and our results suggest that the foundations for this encoding are present at birth. © 2015 John Wiley & Sons Ltd.

  4. The Role of the Syllable in the Segmentation of Cairene Spoken Arabic

    ERIC Educational Resources Information Center

    Aquil, Rajaa

    2012-01-01

    The syllable as a perceptual unit has been investigated cross-linguistically. In Cairene Arabic, syllables fall into three categories: light (CV), heavy (CVC/CVV), and superheavy (CVCC/CVVC). However, heavy syllables in Cairene Arabic vary in weight depending on their position in a word, whether internal or final. The present paper investigates the…

  5. Cross-linguistic differences in the use of durational cues for the segmentation of a novel language.

    PubMed

    Ordin, Mikhail; Polyanskaya, Leona; Laka, Itziar; Nespor, Marina

    2017-07-01

    It is widely accepted that duration, in the form of phonological phrase-final lengthening, can be exploited in the segmentation of a novel language, i.e., in extracting discrete constituents from continuous speech. The use of final lengthening for segmentation and its facilitatory effect have been claimed to be universal. However, lengthening in the world's languages can also mark lexically stressed syllables. Stress-induced lengthening can potentially conflict with right-edge phonological phrase boundary lengthening. Thus, the processing of durational cues in segmentation may depend on the listener's linguistic background, e.g., on the specific correlates and unmarked location of lexical stress in the native language of the listener. We tested this prediction and found that segmentation by both German and Basque speakers is facilitated when lengthening is aligned with the word-final syllable and is not affected by lengthening on either the penultimate or the antepenultimate syllable. Lengthening of the word-final syllable, however, does not help Italian and Spanish speakers to segment continuous speech, and lengthening of the antepenultimate syllable impedes their performance. We also found a facilitatory effect of penultimate lengthening on segmentation by Italians. These results confirm our hypothesis that the processing of lengthening cues is not universal, and that the interpretation of lengthening as a phonological phrase-final boundary marker in a novel language of exposure can be overridden by the phonology of lexical stress in the native language of the listener.

  6. The Effect of Syllable Repetition Rate on Vocal Characteristics

    ERIC Educational Resources Information Center

    Topbas, Oya; Orlikoff, Robert F.; St. Louis, Kenneth O.

    2012-01-01

    This study examined whether mean vocal fundamental frequency (F0) or speech sound pressure level (SPL) varies with changes in syllable repetition rate. Twenty-four young adults (12 M and 12 F) repeated the syllables /pʌ/, /pʌtə/, and /pʌtəkə/ at a modeled "slow" rate of approximately one…

  7. The Ortho-Syllable as a Processing Unit in Handwriting: The Mute E Effect

    ERIC Educational Resources Information Center

    Lambert, Eric; Sausset, Solen; Rigalleau, François

    2015-01-01

    Some research on written production has focused on the role of the syllable as a processing unit. However, the precise nature of this syllable unit has yet to be elucidated. The present study examined whether the nature of this processing unit is orthographic (i.e., an ortho-syllable) or phonological. We asked French adults to copy three-syllable…

  8. Subtlety of Ambient-Language Effects in Babbling: A Study of English- and Chinese-Learning Infants at 8, 10, and 12 Months

    PubMed Central

    Lee, Chia-Cheng; Jhang, Yuna; Chen, Li-mei; Relyea, George; Oller, D. Kimbrough

    2016-01-01

    Prior research on ambient-language effects in babbling has often suggested infants produce language-specific phonological features within the first year. These results have been questioned in research failing to find such effects and challenging the positive findings on methodological grounds. We studied English- and Chinese-learning infants at 8, 10, and 12 months and found listeners could not detect ambient-language effects in the vast majority of infant utterances, but only in items deemed to be words or to contain canonical syllables that may have made them sound like words with language-specific shapes. Thus, the present research suggests the earliest ambient-language effects may be found in emerging lexical items or in utterances influenced by language-specific features of lexical items. Even the ambient-language effects for infant canonical syllables and words were very small compared with ambient-language effects for meaningless but phonotactically well-formed syllable sequences spoken by adult native speakers of English and Chinese. PMID:28496393

  9. Multimodal imaging of temporal processing in typical and atypical language development.

    PubMed

    Kovelman, Ioulia; Wagley, Neelima; Hay, Jessica S F; Ugolini, Margaret; Bowyer, Susan M; Lajiness-O'Neill, Renee; Brennan, Jonathan

    2015-03-01

    New approaches to understanding language and reading acquisition propose that the human brain's ability to synchronize its neural firing rate to syllable-length linguistic units may be important to children's ability to acquire human language. Yet, little evidence from brain imaging studies has been available to support this proposal. Here, we summarize three recent brain imaging (functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), and magnetoencephalography (MEG)) studies from our laboratories with young English-speaking children (aged 6-12 years). In the first study (fNIRS), we used an auditory beat perception task to show that, in children, the left superior temporal gyrus (STG) responds preferentially to rhythmic beats at 1.5 Hz. In the second study (fMRI), we found correlations between children's amplitude rise-time sensitivity, phonological awareness, and brain activation in the left STG. In the third study (MEG), typically developing children outperformed children with autism spectrum disorder in extracting words from rhythmically rich foreign speech and displayed different brain activation during the learning phase. The overall findings suggest that the efficiency with which left temporal regions process slow temporal (rhythmic) information may be important for gains in language and reading proficiency. These findings carry implications for better understanding of the brain's mechanisms that support language and reading acquisition during both typical and atypical development. © 2014 New York Academy of Sciences.

  10. Influence of Syllable Structure on L2 Auditory Word Learning

    ERIC Educational Resources Information Center

    Hamada, Megumi; Goya, Hideki

    2015-01-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…

  11. Acquisition of speech rhythm in a second language by learners with rhythmically different native languages.

    PubMed

    Ordin, Mikhail; Polyanskaya, Leona

    2015-08-01

    The development of speech rhythm in second language (L2) acquisition was investigated. Speech rhythm was defined as durational variability that can be captured by the interval-based rhythm metrics. These metrics were used to examine the differences in durational variability between proficiency levels in L2 English spoken by French and German learners. The results reveal that durational variability increased as L2 acquisition progressed in both groups of learners. This indicates that speech rhythm in L2 English develops from more syllable-timed toward more stress-timed patterns irrespective of whether the native language of the learner is rhythmically similar to or different from the target language. Although both groups showed similar development of speech rhythm in L2 acquisition, there were also differences: German learners achieved a degree of durational variability typical of the target language, while French learners exhibited lower variability than native British speakers, even at an advanced proficiency level.
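The abstract above does not name its interval-based rhythm metrics; one widely used member of that family is the normalized Pairwise Variability Index (nPVI) over successive vocalic or consonantal interval durations (Grabe & Low). The sketch below is an illustration of that family of metrics, not the authors' exact implementation:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index: the mean absolute
    difference between successive interval durations, normalized by
    their local mean and scaled by 100. Higher values indicate more
    stress-timed-like durational variability; lower values indicate
    more syllable-timed-like rhythm."""
    if len(durations) < 2:
        raise ValueError("need at least two interval durations")
    diffs = [abs(a - b) / ((a + b) / 2.0)
             for a, b in zip(durations, durations[1:])]
    return 100.0 * sum(diffs) / len(diffs)
```

A perfectly isochronous sequence yields 0, and increasing alternation of long and short intervals raises the score, which is how a shift from syllable-timed toward stress-timed patterns could register in such a metric.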

  12. [Two cases of fluent aphasia with selective difficulty of syllable identification].

    PubMed

    Endo, K; Suzuki, K; Yamadori, A; Fujii, T; Tobita, M; Ohtake, H

    1999-10-01

    We report two aphasic patients who could discriminate Japanese syllables but could not identify them. Case 1 was a 51-year-old right-handed woman with 12 years of education. Case 2 was a 50-year-old right-handed man with 9 years of education. Both developed fluent aphasia after a cerebral infarction. Brain MRI of case 1 revealed widely distributed lesions including the inferior frontal, superior temporal, angular and supramarginal gyri. Lesions revealed by brain CT in case 2 included the left superior and middle temporal, angular and supramarginal gyri. Both showed severe impairment of repetition and confrontation naming. No difference in performance was found between repetition of single syllables and repetition of polysyllabic words. In contrast, oral reading of Kana characters was preserved. We examined their ability to perceive syllables in detail. In the discrimination task, they judged whether a pair of heard syllables was the same or different. Case 1 was correct on 85% of the trials and case 2 on 98%. In an identification task, they heard a syllable and chose the corresponding Kana, Kanji, or picture out of 10 candidates. Case 1 was correct on only 30% and case 2 on 50% of these trials. On the other hand, selection of the correct target in response to a polysyllabic word was much better: 70% in case 1 and 90% in case 2. Based on these data we conclude that (1) syllabic identification is a distinct process from syllabic discrimination, and (2) comprehension of a polysyllabic word can be achieved even when the precise phonological analysis of its constituent syllables is impaired.

  13. Imaging auditory representations of song and syllables in populations of sensorimotor neurons essential to vocal communication.

    PubMed

    Peh, Wendy Y X; Roberts, Todd F; Mooney, Richard

    2015-04-08

    Vocal communication depends on the coordinated activity of sensorimotor neurons important to vocal perception and production. How vocalizations are represented by spatiotemporal activity patterns in these neuronal populations remains poorly understood. Here we combined intracellular recordings and two-photon calcium imaging in anesthetized adult zebra finches (Taeniopygia guttata) to examine how learned birdsong and its component syllables are represented in identified projection neurons (PNs) within HVC, a sensorimotor region important for song perception and production. These experiments show that neighboring HVC PNs can respond at markedly different times to song playback and that different syllables activate spatially intermingled PNs within a local (~100 μm) region of HVC. Moreover, noise correlations were stronger between PNs that responded most strongly to the same syllable and were spatially graded within and between classes of PNs. These findings support a model in which syllabic and temporal features of song are represented by spatially intermingled PNs functionally organized into cell- and syllable-type networks within local spatial scales in HVC. Copyright © 2015 the authors 0270-6474/15/355589-17$15.00/0.

  14. Dichotic listening performance predicts language comprehension.

    PubMed

    Asbjørnsen, Arve E; Helland, Turid

    2006-05-01

    Dichotic listening performance is considered a reliable and valid procedure for the assessment of language lateralisation in the brain. However, documentation of a relationship between language functions and dichotic listening performance is sparse, although it is accepted that dichotic listening measures language perception. In particular, language comprehension should show close correspondence to perception of language stimuli. In the present study, we tested samples of reading-impaired and normally achieving children between 10 and 13 years of age with tests of reading skills, language comprehension, and dichotic listening to consonant-vowel (CV) syllables. A high correlation between the language scores and dichotic listening performance was expected. However, since the left-ear score is believed to be an error when assessing language laterality, covariation was expected for the right-ear scores only. In addition, directing attention to one ear's input was believed to reduce the influence of random factors, and thus to yield a more precise estimate of left-hemisphere language capacity. Thus, a stronger correlation between language comprehension skills and dichotic listening performance when attending to the right ear was expected. The analyses yielded a positive correlation between the right-ear score in dichotic listening and language comprehension, an effect that was stronger when attending to the right ear. The present results confirm the assumption that dichotic listening with CV syllables measures an aspect of language perception and language skills that is related to general language comprehension.

  15. Subband-Based Group Delay Segmentation of Spontaneous Speech into Syllable-Like Units

    NASA Astrophysics Data System (ADS)

    Nagarajan, T.; Murthy, H. A.

    2004-12-01

    In the development of a syllable-centric automatic speech recognition (ASR) system, segmentation of the acoustic signal into syllabic units is an important stage. Although the short-term energy (STE) function contains useful information about syllable segment boundaries, it has to be processed before segment boundaries can be extracted. This paper presents a subband-based group delay approach to segment spontaneous speech into syllable-like units. This technique exploits the additive property of the Fourier transform phase and the deconvolution property of the cepstrum to smooth the STE function of the speech signal and make it suitable for syllable boundary detection. By treating the STE function as a magnitude spectrum of an arbitrary signal, a minimum-phase group delay function is derived. This group delay function is found to be a better representative of the STE function for syllable boundary detection. Although the group delay function derived from the STE function of the speech signal contains segment boundaries, the boundaries are difficult to determine in the context of long silences, semivowels, and fricatives. In this paper, these issues are specifically addressed and algorithms are developed to improve the segmentation performance. The speech signal is first passed through a bank of three filters, corresponding to three different spectral bands. The STE functions of these signals are computed. Using these three STE functions, three minimum-phase group delay functions are derived. By combining the evidence derived from these group delay functions, the syllable boundaries are detected. Further, a multiresolution-based technique is presented to overcome the problem of shift in segment boundaries during smoothing. Experiments carried out on the Switchboard and OGI-MLTS corpora show that the error in segmentation is at most 25 milliseconds for 67% and 76.6% of the syllable segments, respectively.
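The core transformation described above — treating the STE contour as the magnitude spectrum of an arbitrary signal and deriving a minimum-phase group delay function — can be sketched as follows. This is an illustrative reconstruction from the abstract, not the authors' implementation; the symmetrization and cepstral-folding steps are standard minimum-phase machinery, and boundary picking from the resulting function is omitted.

```python
import numpy as np

def minimum_phase_group_delay(ste, eps=1e-8):
    """Treat a short-term energy (STE) contour as a magnitude
    spectrum and return the group delay of the corresponding
    minimum-phase signal, which smooths the contour for syllable
    boundary detection."""
    E = np.asarray(ste, dtype=float)
    # Symmetrize so the "spectrum" corresponds to a real signal.
    mag = np.concatenate([E, E[::-1]])
    # Real cepstrum of the log-magnitude "spectrum".
    c = np.real(np.fft.ifft(np.log(mag + eps)))
    N = len(c)
    # Fold to obtain the minimum-phase (one-sided) cepstrum.
    chat = np.zeros(N)
    chat[0] = c[0]
    chat[1:N // 2] = 2 * c[1:N // 2]
    chat[N // 2] = c[N // 2]
    # Group delay tau(w) = Re{ DFT of n * chat[n] }.
    n = np.arange(N)
    tau = np.real(np.fft.fft(n * chat))
    # Keep the half aligned with the original contour's frames.
    return tau[:len(E)]
```

Local extrema of the returned function are then candidate syllable segment boundaries; the paper's subband variant would apply this to three band-limited STE contours and combine the evidence.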

  16. Syllable-Related Breathing in Infants in the Second Year of Life

    ERIC Educational Resources Information Center

    Parham, Douglas F.; Buder, Eugene H.; Oller, D. Kimbrough; Boliek, Carol A.

    2011-01-01

    Purpose: This study explored whether breathing behaviors of infants within the 2nd year of life differ between tidal breathing and breathing supporting single unarticulated syllables and canonical/articulated syllables. Method: Vocalizations and breathing kinematics of 9 infants between 53 and 90 weeks of age were recorded. A strict selection…

  17. Clinical Application of the Mean Babbling Level and Syllable Structure Level

    ERIC Educational Resources Information Center

    Morris, Sherrill R.

    2010-01-01

    Purpose: This clinical exchange reviews two independent phonological assessment measures: mean babbling level (MBL) and syllable structure level (SSL). Both measures summarize phonetic inventory and syllable shape in a calculated average and have been used in research to describe the phonological abilities of children ages 9 to 36 months. An…

  18. Two genetic loci control syllable sequences of ultrasonic courtship vocalizations in inbred mice

    PubMed Central

    2011-01-01

    Background The ultrasonic vocalizations (USV) of courting male mice are known to possess a phonetic structure with a complex combination of several syllables. The genetic mechanisms underlying syllable sequence organization were investigated. Results This study compared syllable sequence organization in two inbred strains of mice, 129S4/SvJae (129) and C57BL/6J (B6), and demonstrated that they possessed two mutually exclusive phenotypes. The 129 strain frequently exhibited a "chevron-wave" USV pattern, characterized by the repetition of chevron-type syllables. The B6 strain produced a "staccato" USV pattern, characterized by the repetition of short-type syllables. An F1 strain obtained by crossing the 129 and B6 strains produced only the staccato phenotype. The chevron-wave and staccato phenotypes reappeared in the F2 generations, following the Mendelian law of independent assortment. Conclusions These results suggest that two genetic loci control the organization of syllable sequences. These loci are occupied by the staccato and chevron-wave alleles in the B6 and 129 strains, respectively. Recombination of these alleles might lead to the diversity of USV patterns produced by mice. PMID:22018021

  19. Language-independent talker-specificity in first-language and second-language speech production by bilingual talkers: L1 speaking rate predicts L2 speaking rate

    PubMed Central

    Bradlow, Ann R.; Kim, Midam; Blasingame, Michael

    2017-01-01

    Second-language (L2) speech is consistently slower than first-language (L1) speech, and L1 speaking rate varies within- and across-talkers depending on many individual, situational, linguistic, and sociolinguistic factors. It is asked whether speaking rate is also determined by a language-independent talker-specific trait such that, across a group of bilinguals, L1 speaking rate significantly predicts L2 speaking rate. Two measurements of speaking rate were automatically extracted from recordings of read and spontaneous speech by English monolinguals (n = 27) and bilinguals from ten L1 backgrounds (n = 86): speech rate (syllables/second), and articulation rate (syllables/second excluding silent pauses). Replicating prior work, L2 speaking rates were significantly slower than L1 speaking rates both across groups (monolinguals' L1 English vs bilinguals' L2 English) and across L1 and L2 within bilinguals. Critically, within the bilingual group, L1 speaking rate significantly predicted L2 speaking rate, suggesting that a significant portion of inter-talker variation in L2 speech is derived from inter-talker variation in L1 speech, and that individual variability in L2 spoken language production may be best understood within the context of individual variability in L1 spoken language production. PMID:28253679
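The two rate measures above differ only in whether silent pauses are counted in the denominator. A minimal sketch of that distinction (the automatic syllable and pause detection used in the study is outside the scope of this illustration):

```python
def speech_rate(n_syllables, total_dur_s):
    """Speech rate: syllables per second over the whole recording,
    silent pauses included in the denominator."""
    return n_syllables / total_dur_s

def articulation_rate(n_syllables, total_dur_s, pause_dur_s):
    """Articulation rate: syllables per second with silent pauses
    excluded from the denominator."""
    return n_syllables / (total_dur_s - pause_dur_s)
```

For a 10-s utterance containing 30 syllables and 2 s of silent pauses, speech rate is 3.0 syll/s while articulation rate is 3.75 syll/s; the gap between the two reflects pausing behavior rather than articulation speed.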

  20. Defense Contracting: Actions Needed to Explore Additional Opportunities to Gain Efficiencies in Acquiring Foreign Language Support

    DTIC Science & Technology

    2013-02-25

    Directive 5160.41E, Defense Language Program. (GAO-13-251R) ...types of foreign language support that DOD has acquired... Defense Language Transformation Roadmap (January 2005), and Department of Defense Directive 5160.41E, Defense Language Program. ...examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help...

  1. Slavic Prosody: Language Change and Phonological Theory. Cambridge Studies in Linguistics 86.

    ERIC Educational Resources Information Center

    Bethin, Christina Yurkiw

    The history of Slavic prosody gives an account of the Slavic languages at the time of their differentiation and relates these developments to issues in phonological theory. The book first argues that the syllable structure of Slavic changed before the fall of the jers and suggests that intra- and intersyllabic reorganization in Late Common Slavic was far…

  2. Learning across Languages: Bilingual Experience Supports Dual Language Statistical Word Segmentation

    ERIC Educational Resources Information Center

    Antovich, Dylan M.; Graf Estes, Katharine

    2018-01-01

    Bilingual acquisition presents learning challenges beyond those found in monolingual environments, including the need to segment speech in two languages. Infants may use statistical cues, such as syllable-level transitional probabilities, to segment words from fluent speech. In the present study we assessed monolingual and bilingual 14-month-olds'…
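The syllable-level transitional probabilities mentioned above are forward conditional probabilities P(next syllable | current syllable) estimated from adjacent syllable pairs; within-word transitions are typically higher than across-word transitions, which is what makes them a segmentation cue. A minimal sketch over a toy stream (the syllable labels are invented for illustration):

```python
from collections import Counter

def transitional_probs(syllables):
    """Forward transitional probability P(b | a) for each adjacent
    syllable pair (a, b): count(a followed by b) / count(a occurring
    in pair-initial position)."""
    bigrams = Counter(zip(syllables, syllables[1:]))
    initials = Counter(syllables[:-1])
    return {(a, b): n / initials[a] for (a, b), n in bigrams.items()}

# Toy stream: "go-la" always recurs intact, so its internal transition
# is 1.0, while transitions out of word-final "bu" are split.
stream = ["go", "la", "bu", "tu", "da", "ro",
          "go", "la", "bu", "pa", "bi", "ku"]
```

A statistical learner (infant or model) exploiting this cue would posit word boundaries where the transitional probability dips.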

  3. Perception of steady-state vowels and vowelless syllables by adults and children

    NASA Astrophysics Data System (ADS)

    Nittrouer, Susan

    2005-04-01

    Vowels can be produced as long, isolated, and steady-state, but that is not how they occur in natural speech. Instead, natural speech consists of almost continuously changing (i.e., dynamic) acoustic forms from which mature listeners recover underlying phonetic form. Some theories suggest that children need steady-state information to recognize vowels (and so learn vowel systems), even though that information is sparse in natural speech. The current study examined whether young children can recover vowel targets from dynamic forms, or whether they need steady-state information. Vowel recognition was measured for adults and children (3, 5, and 7 years) for natural productions of /dæd/, /dUd/, /æ/, and /U/, edited to make six stimulus sets: three dynamic (whole syllables; syllables with the middle 50 percent replaced by a cough; syllables with all but the first and last three pitch periods replaced by a cough), and three steady-state (natural, isolated vowels; reiterated pitch periods from those vowels; reiterated pitch periods from the syllables). Adults scored nearly perfectly on all but the first/last-three-pitch-periods stimuli. Children performed nearly perfectly only when the entire syllable was heard, and performed similarly (near 80% correct) for all other stimuli. Consequently, children need dynamic forms to perceive vowels; steady-state forms are not preferred.

  4. The Different Time Course of Phonotactic Constraint Learning in Children and Adults: Evidence from Speech Errors

    ERIC Educational Resources Information Center

    Smalle, Eleonore H. M.; Muylle, Merel; Szmalec, Arnaud; Duyck, Wouter

    2017-01-01

    Speech errors typically respect the speaker's implicit knowledge of language-wide phonotactics (e.g., /t/ cannot be a syllable onset in the English language). Previous work demonstrated that adults can learn novel experimentally induced phonotactic constraints by producing syllable strings in which the allowable position of a phoneme depends on…

  5. Cross-language comparisons of contextual variation in the production and perception of vowels

    NASA Astrophysics Data System (ADS)

    Strange, Winifred

    2005-04-01

    In the last two decades, a considerable amount of research has investigated second-language (L2) learners' problems with the perception and production of non-native vowels. Most studies have been conducted using stimuli in which the vowels are produced and presented in simple, citation-form (list) monosyllabic or disyllabic utterances. In my laboratory, we have investigated the spectral (static/dynamic formant patterns) and temporal (syllable duration) variation in vowel productions as a function of speech style (list/sentence utterances), speaking rate (normal/rapid), sentence focus (narrow focus/post-focus), and phonetic context (voicing/place of surrounding consonants). Data will be presented for a set of languages that include large and small vowel inventories and stress-, syllable-, and mora-timed prosody, and that vary in the phonological/phonetic function of vowel length, diphthongization, and palatalization. Results show language-specific patterns of contextual variation that affect the cross-language acoustic similarity of vowels. Research on cross-language patterns of perceived phonetic similarity by naive listeners suggests that listeners' knowledge of native-language (L1) patterns of contextual variation influences their L1/L2 similarity judgments and, subsequently, their discrimination of L2 contrasts. Implications of these findings for assessing L2 learners' perception of vowels and for developing laboratory training procedures to improve L2 vowel perception will be discussed. [Work supported by NIDCD.]

  6. An Avian Basal Ganglia-Forebrain Circuit Contributes Differentially to Syllable Versus Sequence Variability of Adult Bengalese Finch Song

    PubMed Central

    Hampton, Cara M.; Sakata, Jon T.; Brainard, Michael S.

    2009-01-01

    Behavioral variability is important for motor skill learning but continues to be present and actively regulated even in well-learned behaviors. In adult songbirds, two types of song variability can persist and are modulated by social context: variability in syllable structure and variability in syllable sequencing. The degree to which the control of both types of adult variability is shared or distinct remains unknown. The output of a basal ganglia-forebrain circuit, LMAN (the lateral magnocellular nucleus of the anterior nidopallium), has been implicated in song variability. For example, in adult zebra finches, neurons in LMAN actively control the variability of syllable structure. It is unclear, however, whether LMAN contributes to variability in adult syllable sequencing because sequence variability in adult zebra finch song is minimal. In contrast, Bengalese finches retain variability in both syllable structure and syllable sequencing into adulthood. We analyzed the effects of LMAN lesions on the variability of syllable structure and sequencing and on the social modulation of these forms of variability in adult Bengalese finches. We found that lesions of LMAN significantly reduced the variability of syllable structure but not of syllable sequencing. We also found that LMAN lesions eliminated the social modulation of the variability of syllable structure but did not detect significant effects on the modulation of sequence variability. These results show that LMAN contributes differentially to syllable versus sequence variability of adult song and suggest that these forms of variability are regulated by distinct neural pathways. PMID:19357331

  7. In the Beginning Was the Rhyme? A Reflection on Hulme, Hatcher, Nation, Brown, Adams, and Stuart (2002).

    ERIC Educational Resources Information Center

    Goswami, Usha

    2002-01-01

    Describes phonological sensitivity at different grain sizes as a good predictor of reading acquisition in all languages. Presents information on development of phonological sensitivity for syllables, onsets, and rimes. Illustrates that phoneme-level skills develop fastest in children acquiring orthographically consistent languages with simple…

  8. Primary phonological planning units in spoken word production are language-specific: Evidence from an ERP study.

    PubMed

    Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih

    2017-07-19

    It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.

  9. Speech perception and reading: two parallel modes of understanding language and implications for acquiring literacy naturally.

    PubMed

    Massaro, Dominic W

    2012-01-01

I review 2 seminal research reports published in this journal during its second decade, more than a century ago. Given psychology's subdisciplines, they would not normally be reviewed together, because one involves reading and the other speech perception. The small amount of interaction between these domains might have limited research and theoretical progress. In fact, the 2 early research reports revealed common processes involved in these 2 forms of language processing. Their illustration of the role of Wundt's apperceptive process in reading and speech perception anticipated descriptions of contemporary theories of pattern recognition, such as the fuzzy logical model of perception. Based on the commonalities between reading and listening, one can question why they have been viewed so differently. Most researchers and educators believe that spoken language is acquired naturally, from birth onward and even prenatally, through interactions with people who talk, whereas learning to read is not possible until the child has acquired spoken language, reaches school age, and receives formal instruction. If an appropriate form of written text is made available early in a child's life, however, the current hypothesis is that reading will also be learned inductively and emerge naturally, with no significant negative consequences. If this proposal is true, it should soon be possible to create an interactive system, Technology Assisted Reading Acquisition, to allow children to acquire literacy naturally.

  10. Within- and across-language spectral and temporal variability of vowels in different phonetic and prosodic contexts: Russian and Japanese

    NASA Astrophysics Data System (ADS)

    Gilichinskaya, Yana D.; Hisagi, Miwako; Law, Franzo F.; Berkowitz, Shari; Ito, Kikuyo

    2005-04-01

    Contextual variability of vowels in three languages with large vowel inventories was examined previously. Here, variability of vowels in two languages with small inventories (Russian, Japanese) was explored. Vowels were produced by three female speakers of each language in four contexts: (Vba) disyllables and in 3-syllable nonsense words (gaC1VC2a) embedded within carrier sentences; contexts included bilabial stops (bVp) in normal rate sentences and alveolar stops (dVt) in both normal and rapid rate sentences. Dependent variables were syllable durations and formant frequencies at syllable midpoint. Results showed very little variation across consonant and rate conditions in formants for /i/ in both languages. Japanese short /u, o, a/ showed fronting (F2 increases) in alveolar context relative to labial context (1.3-2.0 Barks), which was more pronounced in rapid sentences. Fronting of Japanese long vowels was less pronounced (0.3 to 0.9 Barks). Japanese long/short vowel ratios varied with speaking style (syllables versus sentences) and speaking rate. All Russian vowels except /i/ were fronted in alveolar vs labial context (1.1-3.1 Barks) but showed little change in either spectrum or duration with speaking rate. Comparisons of these patterns of variability with American English, French and German vowel results will be discussed.
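The Bark differences reported in this abstract presuppose a Hz-to-Bark conversion. As an illustration only (the formula choice and the F2 values below are assumptions, not taken from the study), Traunmüller's closed-form approximation can express formant fronting as a Bark difference:

```python
def hz_to_bark(f_hz: float) -> float:
    """Hz -> Bark via Traunmuller's (1990) closed-form approximation."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

# Hypothetical F2 midpoints illustrating vowel "fronting" as a Bark difference
f2_labial = 1200.0    # Hz, F2 in a labial (bVp) context
f2_alveolar = 1700.0  # Hz, F2 in an alveolar (dVt) context
fronting = hz_to_bark(f2_alveolar) - hz_to_bark(f2_labial)   # about 2.3 Barks
```

Working on the Bark scale rather than raw Hz is what lets shifts in different frequency regions be compared on a common perceptual footing.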

  11. Syllable acoustics, temporal patterns, and call composition vary with behavioral context in Mexican free-tailed bats

    PubMed Central

    Bohn, Kirsten M.; Schmidt-French, Barbara; Ma, Sean T.; Pollak, George D.

    2008-01-01

    Recent research has shown that some bat species have rich vocal repertoires with diverse syllable acoustics. Few studies, however, have compared vocalizations across different behavioral contexts or examined the temporal emission patterns of vocalizations. In this paper, a comprehensive examination of the vocal repertoire of Mexican free-tailed bats, T. brasiliensis, is presented. Syllable acoustics and temporal emission patterns for 16 types of vocalizations including courtship song revealed three main findings. First, although in some cases syllables are unique to specific calls, other syllables are shared among different calls. Second, entire calls associated with one behavior can be embedded into more complex vocalizations used in entirely different behavioral contexts. Third, when different calls are composed of similar syllables, distinctive temporal emission patterns may facilitate call recognition. These results indicate that syllable acoustics alone do not likely provide enough information for call recognition; rather, the acoustic context and temporal emission patterns of vocalizations may affect meaning. PMID:19045674

  12. The Effects of Background Noise on Dichotic Listening to Consonant-Vowel Syllables

    ERIC Educational Resources Information Center

    Sequeira, Sarah Dos Santos; Specht, Karsten; Hamalainen, Heikki; Hugdahl, Kenneth

    2008-01-01

    Lateralization of verbal processing is frequently studied with the dichotic listening technique, yielding a so called right ear advantage (REA) to consonant-vowel (CV) syllables. However, little is known about how background noise affects the REA. To address this issue, we presented CV-syllables either in silence or with traffic background noise…

  13. On the Edge of Language Acquisition: Inherent Constraints on Encoding Multisyllabic Sequences in the Neonate Brain

    ERIC Educational Resources Information Center

    Ferry, Alissa L.; Fló, Ana; Brusini, Perrine; Cattarossi, Luigi; Macagno, Francesco; Nespor, Marina; Mehler, Jacques

    2016-01-01

    To understand language, humans must encode information from rapid, sequential streams of syllables--tracking their order and organizing them into words, phrases, and sentences. We used Near-Infrared Spectroscopy (NIRS) to determine whether human neonates are born with the capacity to track the positions of syllables in multisyllabic sequences.…

  14. Syllable Frequency Effects in Visual Word Recognition: Developmental Approach in French Children

    ERIC Educational Resources Information Center

    Maionchi-Pino, Norbert; Magnan, Annie; Ecalle, Jean

    2010-01-01

    This study investigates the syllable's role in the normal reading acquisition of French children at three grade levels (1st, 3rd, and 5th), using a modified version of Cole, Magnan, and Grainger's (1999) paradigm. We focused on the effects of syllable frequency and word frequency. The results suggest that from the first to third years of reading…

  15. Early second language acquisition: a comparison of the linguistic output of a pre-school child acquiring English as a second language with that of a monolingual peer.

    PubMed

    Letts, C A

    1991-08-01

    Two pre-school children were recorded at regular intervals over a 9-month period while playing freely together. One child was acquiring English as a second language, whilst the other was a monolingual English speaker. The sociolinguistic domain was such that the children were likely to be motivated to communicate with each other in English. A variety of quantitative measures were taken from the transcribed data, including measures of utterance type, length, type-token ratios, use of auxiliaries and morphology. The child for whom English was a second language was found to be well able to interact on equal terms with his partner, despite being somewhat less advanced in some aspects of English language development by the end of the sampling period. Whilst he appeared to be consolidating his language skills during this time, his monolingual partner appeared to be developing rapidly. It is hoped that normative longitudinal data of this kind will be of use in the accurate assessment of children from dual language backgrounds, who may be referred for speech and language therapy.

  16. Intersensory Redundancy and Seven-Month-Old Infants' Memory for Arbitrary Syllable-Object Relations.

    ERIC Educational Resources Information Center

    Gogate, Lakshmi J.; Bahrick, Lorraine E.

    Seven-month-old infants require redundant information such as temporal synchrony to learn arbitrary syllable-object relations. Infants learned the relations between spoken syllables, /a/ and /i/, and two moving objects only when temporal synchrony was present during habituation. Two experiments examined infants' memory for these relations. In…

  17. The Functional Unit in Phonological Encoding: Evidence for Moraic Representation in Native Japanese Speakers

    ERIC Educational Resources Information Center

    Kureta, Yoichi; Fushimi, Takao; Tatsumi, Itaru F.

    2006-01-01

Speech production studies have shown that the phonological form of a word is made up of phonemic segments in stress-timed languages (e.g., Dutch) and of syllables in syllable-timed languages (e.g., Chinese). To clarify the functional unit of mora-timed languages, the authors asked native Japanese speakers to perform an implicit priming task (A. S.…

  18. Evolution of language assessment in patients with acquired neurological disorders in Brazil

    PubMed Central

    Parente, Maria Alice de Mattos Pimenta; Baradel, Roberta Roque; Fonseca, Rochele Paz; Pereira, Natalie; Carthery-Goulart, Maria Teresa

    2014-01-01

The objective of this paper was to describe the evolution of language assessments in patients with acquired neurological diseases over a period of around 45 years, from 1970, when interdisciplinarity in Neuropsychology first began in Brazil, to the present day. Data for the first twenty years were based on the memories of the university professors of speech pathology who were in charge of teaching aphasia. We then show the contributions of Linguistics and Cognitive Psychology, as well as of psycholinguistic and psychometric criteria, to language evaluation. Finally, the current panorama of adaptations and creations of validated and standardized instruments is given, based on a search of the databases PubMed, Scopus and Lilacs. Our closing remarks highlight the diversity in evaluation approaches and the recent tendency of language evaluations linked to new technologies such as brain imaging and computational analysis. PMID:29213904

  19. The Word Frequency Effect on Second Language Vocabulary Learning

    ERIC Educational Resources Information Center

    Koirala, Cesar

    2015-01-01

    This study examines several linguistic factors as possible contributors to perceived word difficulty in second language learners in an experimental setting. The investigated factors include: (1) frequency of word usage in the first language, (2) word length, (3) number of syllables in a word, and (4) number of consonant clusters in a word. Word…

  20. Phonological Similarity in American Sign Language.

    ERIC Educational Resources Information Center

    Hildebrandt, Ursula; Corina, David

    2002-01-01

    Investigates deaf and hearing subjects' ratings of American Sign Language (ASL) signs to assess whether linguistic experience shapes judgments of sign similarity. Findings are consistent with linguistic theories that posit movement and location as core structural elements of syllable structure in ASL. (Author/VWL)

  1. A Syllable Segmentation, Letter-Sound, and Initial-Sound Intervention with Students Who Are Deaf or Hard of Hearing and Use Sign Language

    ERIC Educational Resources Information Center

    Tucci, Stacey L.; Easterbrooks, Susan R.

    2015-01-01

    This study investigated children's acquisition of three aspects of an early literacy curriculum, "Foundations for Literacy" ("Foundations"), designed specifically for prekindergarten students who are deaf or hard of hearing (DHH): syllable segmentation, identification of letter-sound correspondences, and initial-sound…

  2. Two-Month-Old Infants' Sensitivity to Changes in Arbitrary Syllable-Object Pairings: The Role of Temporal Synchrony

    ERIC Educational Resources Information Center

    Gogate, Lakshmi J.; Prince, Christopher G.; Matatyaho, Dalit J.

    2009-01-01

    To explore early lexical development, the authors examined infants' sensitivity to changes in spoken syllables and objects given different temporal relations between syllable-object pairings. In Experiment 1, they habituated 2-month-olds to 1 syllable, /tah/ or /gah/, paired with an object in "synchronous" (utterances coincident with object…

  3. Syllable Transposition Effects in Korean Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang H.; Kwon, Youan; Kim, Kyungil; Rastle, Kathleen

    2015-01-01

    Research on the impact of letter transpositions in visual word recognition has yielded important clues about the nature of orthographic representations. This study investigated the impact of syllable transpositions on the recognition of Korean multisyllabic words. Results showed that rejection latencies in visual lexical decision for…

  4. Final Syllable Lengthening (FSL) in Infant Vocalizations.

    ERIC Educational Resources Information Center

    Nathani, Suneeti; Oller, D. Kimbrough; Cobo-Lewis, Alan B.

    2003-01-01

Sought to verify research findings that suggest there may be a U-shaped developmental trajectory for final syllable lengthening (FSL). Attempted to determine whether vocal maturity and deafness influence FSL. Eight normally hearing infants and eight deaf infants were examined at three levels of prelinguistic vocal development. (Author/VWL)

  5. Exploring the role of hand gestures in learning novel phoneme contrasts and vocabulary in a second language

    PubMed Central

    Kelly, Spencer D.; Hirata, Yukari; Manansala, Michael; Huang, Jessica

    2014-01-01

    Co-speech hand gestures are a type of multimodal input that has received relatively little attention in the context of second language learning. The present study explored the role that observing and producing different types of gestures plays in learning novel speech sounds and word meanings in an L2. Naïve English-speakers were taught two components of Japanese—novel phonemic vowel length contrasts and vocabulary items comprised of those contrasts—in one of four different gesture conditions: Syllable Observe, Syllable Produce, Mora Observe, and Mora Produce. Half of the gestures conveyed intuitive information about syllable structure, and the other half, unintuitive information about Japanese mora structure. Within each Syllable and Mora condition, half of the participants only observed the gestures that accompanied speech during training, and the other half also produced the gestures that they observed along with the speech. The main finding was that participants across all four conditions had similar outcomes in two different types of auditory identification tasks and a vocabulary test. The results suggest that hand gestures may not be well suited for learning novel phonetic distinctions at the syllable level within a word, and thus, gesture-speech integration may break down at the lowest levels of language processing and learning. PMID:25071646

  6. Pathways Seen for Acquiring Languages

    ERIC Educational Resources Information Center

    Sparks, Sarah D.

    2010-01-01

    New studies on how language learning occurs are beginning to chip away at some long-held notions about second-language acquisition and point to potential learning benefits for students who speak more than one language. New National Science Foundation-funded collaborations among educators, cognitive scientists, neuroscientists, psychologists, and…

  7. Left and right basal ganglia and frontal activity during language generation: contributions to lexical, semantic, and phonological processes.

    PubMed

    Crosson, Bruce; Benefield, Hope; Cato, M Allison; Sadek, Joseph R; Moore, Anna Bacon; Wierenga, Christina E; Gopinath, Kaundinya; Soltysik, David; Bauer, Russell M; Auerbach, Edward J; Gökçay, Didem; Leonard, Christiana M; Briggs, Richard W

    2003-11-01

fMRI was used to determine the frontal, basal ganglia, and thalamic structures engaged by three facets of language generation: lexical status of generated items, the use of semantic vs. phonological information during language generation, and rate of generation. During fMRI, 21 neurologically normal subjects performed four tasks: generation of nonsense syllables given beginning and ending consonant blends, generation of words given a rhyming word, generation of words given a semantic category at a fast rate (matched to the rate of nonsense syllable generation), and generation of words given a semantic category at a slow rate (matched to the rate of generating rhyming words). Components of a left pre-SMA-dorsal caudate nucleus-ventral anterior thalamic loop were active during word generation from rhyming or category cues but not during nonsense syllable generation. Findings indicate that this loop is involved in retrieving words from pre-existing lexical stores. Relatively diffuse activity in the right basal ganglia (caudate nucleus and putamen) also was found during word-generation tasks but not during nonsense syllable generation. Given the relative absence of right frontal activity during the word generation tasks, we suggest that the right basal ganglia activity serves to suppress right frontal activity, preventing right frontal structures from interfering with language production. Current findings establish roles for the left and the right basal ganglia in word generation. Hypotheses are discussed for future research to help refine our understanding of basal ganglia functions in language generation.

  8. Acoustic-Emergent Phonology in the Amplitude Envelope of Child-Directed Speech

    PubMed Central

    Leong, Victoria; Goswami, Usha

    2015-01-01

When acquiring language, young children may use acoustic spectro-temporal patterns in speech to derive phonological units in spoken language (e.g., prosodic stress patterns, syllables, phonemes). Children appear to learn acoustic-phonological mappings rapidly, without direct instruction, yet the underlying developmental mechanisms remain unclear. Across different languages, a relationship between amplitude envelope sensitivity and phonological development has been found, suggesting that children may make use of amplitude modulation (AM) patterns within the envelope to develop a phonological system. Here we present the Spectral Amplitude Modulation Phase Hierarchy (S-AMPH) model, a set of algorithms for deriving the dominant AM patterns in child-directed speech (CDS). Using Principal Components Analysis, we show that rhythmic CDS contains an AM hierarchy comprising 3 core modulation timescales. These timescales correspond to key phonological units: prosodic stress (Stress AM, ~2 Hz), syllables (Syllable AM, ~5 Hz) and onset-rime units (Phoneme AM, ~20 Hz). We argue that these AM patterns could in principle be used by naïve listeners to compute acoustic-phonological mappings without lexical knowledge. We then demonstrate that the modulation statistics within this AM hierarchy indeed parse the speech signal into a primitive hierarchically-organised phonological system comprising stress feet (proto-words), syllables and onset-rime units. We apply the S-AMPH model to two other CDS corpora, one spontaneous and one deliberately-timed. The model accurately identified 72–82% (freely-read CDS) and 90–98% (rhythmically-regular CDS) stress patterns, syllables and onset-rime units. This in-principle demonstration that primitive phonology can be extracted from speech AMs is termed Acoustic-Emergent Phonology (AEP) theory. AEP theory provides a set of methods for examining how early phonological development is shaped by the temporal modulation structure of speech across languages.
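As a rough illustration of the idea behind this approach (the published S-AMPH model derives its modulation filterbank via PCA; the rectify-and-smooth envelope and FFT below are my own simplifications), dominant modulation timescales can be read off a signal's amplitude envelope:

```python
import numpy as np

def modulation_spectrum(signal, sr, smooth_ms=20):
    """Rectify-and-smooth amplitude envelope, then FFT it to see which
    modulation rates dominate (a crude stand-in for a modulation filterbank)."""
    win = int(sr * smooth_ms / 1000)
    envelope = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")
    envelope = envelope - envelope.mean()          # drop the DC component
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sr)
    return freqs, spectrum

# Synthetic "syllable-rate" test: a 200 Hz tone amplitude-modulated at 5 Hz
sr = 1000
t = np.arange(0, 4, 1 / sr)
tone = (1 + 0.8 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 200 * t)
freqs, spectrum = modulation_spectrum(tone, sr)
dominant = freqs[np.argmax(spectrum)]              # peaks near 5 Hz
```

For real child-directed speech, peaks in such a modulation spectrum would fall near the stress (~2 Hz), syllable (~5 Hz), and onset-rime (~20 Hz) tiers described in the abstract.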

  10. Relationship between the change of language symptoms and the change of regional cerebral blood flow in the recovery process of two children with acquired aphasia.

    PubMed

    Kozuka, Junko; Uno, Akira; Matsuda, Hiroshi; Toyoshima, Yoshiya; Hamano, Shin-Ichiro

    2017-06-01

This study aimed to investigate the relationship between the change of language symptoms and the change of regional cerebral blood flow (rCBF) in the recovery process of two children with acquired aphasia caused by infarctions from Moyamoya disease with an onset age of 8 years. We compared the results for the Standard Language Test of Aphasia (SLTA) with rCBF changes in 7 language regions in the left hemisphere and their homologous regions in the right hemisphere at 4 time points from 3 weeks up to 5 years after the onset of aphasia, while controlling for the effect of age. In both cases, strong correlations were seen within a hemisphere between adjacent regions or regions that are connected by neuronal fibers, and between some language regions in the left hemisphere and their homologous regions in the right hemisphere. Conversely, there were differences between the two cases in the time course of rCBF changes during their recovery process. Consistent with previous studies, the current study suggested that both hemispheres were involved in the long-term recovery of language symptoms in children with acquired aphasia. We suggest that the differences between the two cases during their recovery process might be influenced by the brain states before aphasia, by which hemisphere was affected, and by the timing of the surgical revascularization procedure. However, the observed rCBF changes correlated strongly with the changes in language performance, so it is possible that rCBF could be used as a biomarker for language symptom changes. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  11. Brain responses in 4-month-old infants are already language specific.

    PubMed

    Friederici, Angela D; Friedrich, Manuela; Christophe, Anne

    2007-07-17

    Language is the most important faculty that distinguishes humans from other animals. Infants learn their native language fast and effortlessly during the first years of life, as a function of the linguistic input in their environment. Behavioral studies reported the discrimination of melodic contours [1] and stress patterns [2, 3] in 1-4-month-olds. Behavioral [4, 5] and brain measures [6-8] have shown language-independent discrimination of phonetic contrasts at that age. Language-specific discrimination, however, has been reported for phonetic contrasts only for 6-12-month-olds [9-12]. Here we demonstrate language-specific discrimination of stress patterns in 4-month-old German and French infants by using electrophysiological brain measures. We compare the processing of disyllabic words differing in their rhythmic structure, mimicking German words being stressed on the first syllable, e.g., pápa/daddy[13], and French ones being stressed on the second syllable, e.g., papá/daddy. Event-related brain potentials reveal that experience with German and French differentially affects the brain responses of 4-month-old infants, with each language group displaying a processing advantage for the rhythmic structure typical in its native language. These data indicate language-specific neural representations of word forms in the infant brain as early as 4 months of age.

  12. Implementation of Three Text to Speech Systems for Kurdish Language

    NASA Astrophysics Data System (ADS)

    Bahrampour, Anvar; Barkhoda, Wafa; Azami, Bahram Zahir

Nowadays, the concatenative method is used in most modern TTS systems to produce artificial speech. The most important challenge in this method is choosing an appropriate unit for creating the database. This unit must guarantee smooth, high-quality speech, and creating a database for it must be reasonable and inexpensive. For example, syllables, phonemes, allophones, and diphones are appropriate units for general-purpose systems. In this paper, we implemented three synthesis systems for the Kurdish language based on the syllable, the allophone, and the diphone, and compared their quality using subjective testing.
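The join-smoothness issue the abstract refers to can be sketched with a toy unit-concatenation routine (a generic illustration of concatenative synthesis, not the authors' Kurdish systems): stored unit waveforms, one per syllable, allophone, or diphone, are spliced with a short linear crossfade.

```python
import numpy as np

def concatenate_units(units, sr=16000, xfade_ms=10):
    """Concatenate recorded unit waveforms (e.g. syllables) with a short
    linear crossfade at each join -- the core of concatenative TTS."""
    n_fade = int(sr * xfade_ms / 1000)
    out = units[0].astype(float)
    for u in units[1:]:
        u = u.astype(float)
        ramp = np.linspace(0.0, 1.0, n_fade)
        # Fade out the tail of the running output, fade in the head of the unit
        overlap = out[-n_fade:] * (1 - ramp) + u[:n_fade] * ramp
        out = np.concatenate([out[:-n_fade], overlap, u[n_fade:]])
    return out
```

The unit-size trade-off the paper evaluates shows up directly here: larger units (syllables) mean fewer joins to smooth but a bigger database; smaller units (allophones, diphones) invert that trade-off.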

  13. Words and possible words in early language acquisition.

    PubMed

    Marchetto, Erika; Bonatti, Luca L

    2013-11-01

In order to acquire language, infants must extract its building blocks, words, and master the rules governing their legal combinations from speech. These two problems are not independent, however: words also have internal structure. Thus, infants must extract two kinds of information from the same speech input. They must find the actual words of their language. Furthermore, they must identify its possible words, that is, the sequences of sounds that, being morphologically well formed, could be words. Here, we show that infants' sensitivity to possible words appears to be more primitive and fundamental than their ability to find actual words. We expose 12- and 18-month-old infants to an artificial language containing a conflict between statistically coherent and structurally coherent items. We show that 18-month-olds can extract possible words when the familiarization stream contains marks of segmentation, but cannot do so when the stream is continuous. Yet, they can find actual words from a continuous stream by computing statistical relationships among syllables. By contrast, 12-month-olds can find possible words when familiarized with a segmented stream, but seem unable to extract statistically coherent items from a continuous stream that contains minimal conflicts between statistical and structural information. These results suggest that sensitivity to word structure is in place earlier than the ability to analyze distributional information. The ability to compute nontrivial statistical relationships becomes fully effective relatively late in development, when infants have already acquired a considerable amount of linguistic knowledge. Thus, mechanisms for structure extraction that do not rely on extensive sampling of the input are likely to have a much larger role in language acquisition than general-purpose statistical abilities. Copyright © 2013. Published by Elsevier Inc.

  14. Syllable Structure in Dysfunctional Portuguese Children's Speech

    ERIC Educational Resources Information Center

    Candeias, Sara; Perdigao, Fernando

    2010-01-01

The goal of this work is to investigate whether children with speech dysfunctions (SD) show a deficit in planning some Portuguese syllable structures (PSS) in continuous speech production. Knowledge of which aspects of speech production are affected by SD is necessary for efficient improvement in therapy techniques. The case study is focused…

  15. "… Trial and error …": Speech-language pathologists' perspectives of working with Indigenous Australian adults with acquired communication disorders.

    PubMed

    Cochrane, Frances Clare; Brown, Louise; Siyambalapitiya, Samantha; Plant, Christopher

    2016-10-01

    This study explored speech-language pathologists' (SLPs) perspectives about factors that influence clinical management of Aboriginal and Torres Strait Islander adults with acquired communication disorders (e.g. aphasia, motor speech disorders). Using a qualitative phenomenological approach, seven SLPs working in North Queensland, Australia with experience working with this population participated in semi-structured in-depth interviews. Qualitative content analysis was used to identify categories and overarching themes within the data. Four categories, in relation to barriers and facilitators, were identified from participants' responses: (1) The Practice Context; (2) Working Together; (3) Client Factors; and (4) Speech-Language Pathologist Factors. Three overarching themes were also found to influence effective speech pathology services: (1) Aboriginal and Torres Strait Islander Cultural Practices; (2) Information and Communication; and (3) Time. This study identified many complex and inter-related factors which influenced SLPs' effective clinical management of this caseload. The findings suggest that SLPs should employ a flexible, holistic and collaborative approach in order to facilitate effective clinical management with Aboriginal and Torres Strait Islander people with acquired communication disorders.

  16. Implicit Segmentation of a Stream of Syllables Based on Transitional Probabilities: An MEG Study

    ERIC Educational Resources Information Center

    Teinonen, Tuomas; Huotilainen, Minna

    2012-01-01

    Statistical segmentation of continuous speech, i.e., the ability to utilise transitional probabilities between syllables in order to detect word boundaries, is reflected in the brain's auditory event-related potentials (ERPs). The N1 and N400 ERP components are typically enhanced for word onsets compared to random syllables during active…
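The transitional probabilities the study refers to are straightforward to compute; a minimal sketch (the syllable strings and the boundary threshold are illustrative assumptions, not the study's stimuli):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(x -> y) = count(xy) / count(x) over adjacent syllable pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, tps, threshold):
    """Posit a word boundary wherever the TP to the next syllable dips
    below the threshold (high TPs hold words together)."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return words
```

On a Saffran-style stream of concatenated trisyllabic nonsense words, within-word TPs approach 1 while across-word TPs drop, so the word boundaries fall out of the TP dips.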

  17. Fluctuations in Unimanual Hand Preference in Infants Following the Onset of Duplicated Syllable Babbling.

    ERIC Educational Resources Information Center

    Ramsay, Douglas S.

    1985-01-01

    Infants were tested for unimanual handedness at weekly intervals for a 14-week period beginning with the week of onset of duplicated syllable babbling. Group analyses indicating effects of sex and/or birth order on fluctuations and date review for individual infants suggested considerable variability across infants in occurrence and/or timing of…

  18. Selective auditory attention in adults: effects of rhythmic structure of the competing language.

    PubMed

    Reel, Leigh Ann; Hicks, Candace Bourland

    2012-02-01

    The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Reception thresholds for English sentences were measured for 50 monolingual English-speaking adults in conditions with 2-talker (male-female) competing speech spoken in a stress-based (English, German), syllable-based (Spanish, French), or mora-based (Japanese) language. Two different masking signals were created for each language (i.e., 2 different 2-talker pairs). All subjects were tested in 10 competing conditions (2 conditions for each of the 5 languages). A significant difference was noted between the 2 masking signals within each language. Across languages, significantly greater listening difficulty was observed in conditions where competing speech was spoken in English, German, or Japanese, as compared with Spanish or French. Results suggest that (a) for a particular language, masking effectiveness can vary between different male-female 2-talker maskers and (b) for stress-based vs. syllable-based languages, competing speech is more difficult to ignore when spoken in a language from the native rhythmic class as compared with a nonnative rhythmic class, regardless of whether the language is familiar or unfamiliar to the listener.

  19. A Prominence Account of Syllable Reduction in Early Speech Development: The Child's Prosodic Phonology of "Tiger" and "Giraffe."

    ERIC Educational Resources Information Center

    Snow, David

    1998-01-01

    This paper tested a theory of syllable prominence with 11 children (ages 11 to 26 months). The theory proposes that syllable prominence is a product of two orthogonal suprasegmental systems: stress/accent peaks and phrase boundaries. Use of the developed prominence scale found it parsimoniously accounted for observed biases in syllable omissions…

  20. Syllables and bigrams: orthographic redundancy and syllabic units affect visual word recognition at different processing levels.

    PubMed

    Conrad, Markus; Carreiras, Manuel; Tamm, Sascha; Jacobs, Arthur M

    2009-04-01

    Over the last decade, there has been increasing evidence for syllabic processing during visual word recognition. If syllabic effects prove to be independent from orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic words. Three experiments are presented to disentangle effects of the frequency of syllabic units and orthographic segments in lexical decision. In Experiment 1 the authors obtained an inhibitory syllable frequency effect that was unaffected by the presence or absence of a bigram trough at the syllable boundary. In Experiments 2 and 3 an inhibitory effect of initial syllable frequency but a facilitative effect of initial bigram frequency emerged when manipulating 1 of the 2 measures and controlling for the other in Spanish words starting with consonant-vowel syllables. The authors conclude that effects of syllable frequency and letter-cluster frequency are independent and arise at different processing levels of visual word recognition. Results are discussed within the framework of an interactive activation model of visual word recognition. (c) 2009 APA, all rights reserved.

  1. Oral-diadochokinesis rates across languages: English and Hebrew norms.

    PubMed

    Icht, Michal; Ben-David, Boaz M

    2014-01-01

    Oro-facial and speech motor control disorders represent a variety of speech and language pathologies. Early identification of such problems is important and carries clinical implications. A common and simple tool for gauging the presence and severity of speech motor control impairments is oral-diadochokinesis (oral-DDK). Surprisingly, norms for adult performance are missing from the literature. The goals of this study were: (1) to establish a norm for oral-DDK rate for (young to middle-age) adult English speakers, by collecting data from the literature (five studies, N = 141); (2) to investigate the possible effect of language (and culture) on oral-DDK performance, by analyzing studies conducted in other languages (five studies, N = 140), alongside the English norm; and (3) to find a new norm for adult Hebrew speakers, by testing 115 speakers. We first offer an English norm with a mean of 6.2 syllables/s (SD = 0.8), and a lower boundary of 5.4 syllables/s that can be used to indicate possible abnormality. Next, we found significant differences between four tested languages (English, Portuguese, Farsi and Greek) in oral-DDK rates. Results suggest the need to set language- and culture-sensitive norms for the application of the oral-DDK task worldwide. Finally, we found the oral-DDK performance for adult Hebrew speakers to be 6.4 syllables/s (SD = 0.8), not significantly different from the English norm. This implies possible phonological similarities between English and Hebrew. We further note that no gender effects were found in our study. We recommend using oral-DDK as an important tool in the speech-language pathologist's arsenal. Yet, application of this task should be done carefully, comparing individual performance to a set norm within the specific language.
Readers will be able to: (1) identify the Speech-Language Pathologist assessment process using the oral-DDK task, by comparing an individual performance to the present English norm, (2) describe the impact of language

  2. Age of language acquisition and cortical language organization in multilingual patients undergoing awake brain mapping.

    PubMed

    Fernández-Coello, Alejandro; Havas, Viktória; Juncadella, Montserrat; Sierpowska, Joanna; Rodríguez-Fornells, Antoni; Gabarrós, Andreu

    2017-06-01

    OBJECTIVE Most knowledge regarding the anatomical organization of multilingualism is based on aphasiology and functional imaging studies. However, the results have still to be validated by the gold standard approach, namely electrical stimulation mapping (ESM) during awake neurosurgical procedures. In this ESM study the authors describe language representation in a highly specific group of 13 multilingual individuals, focusing on how age of acquisition may influence the cortical organization of language. METHODS Thirteen patients who had a high degree of proficiency in multiple languages and were harboring lesions within the dominant, left hemisphere underwent ESM while being operated on under awake conditions. Demographic and language data were recorded in relation to age of language acquisition (for native languages and early- and late-acquired languages), neuropsychological pre- and postoperative language testing, the number and location of language sites, and overlapping distribution in terms of language acquisition time. Lesion growth patterns and histopathological characteristics, location, and size were also recorded. The distribution of language sites was analyzed with respect to age of acquisition and overlap. RESULTS The functional language-related sites were distributed in the frontal (55%), temporal (29%), and parietal lobes (16%). The total number of native language sites was 47. Early-acquired languages (including native languages) were represented in 97 sites (55 overlapped) and late-acquired languages in 70 sites (45 overlapped). The overlapping distribution was 20% for early-early, 71% for early-late, and 9% for late-late. The average lesion size (maximum diameter) was 3.3 cm. There were 5 fast-growing and 7 slow-growing lesions. CONCLUSIONS Cortical language distribution in multilingual patients is not homogeneous, and it is influenced by age of acquisition. Early-acquired languages have a greater cortical representation than languages acquired

  3. θ-Band and β-Band Neural Activity Reflects Independent Syllable Tracking and Comprehension of Time-Compressed Speech.

    PubMed

    Pefkou, Maria; Arnal, Luc H; Fontolan, Lorenzo; Giraud, Anne-Lise

    2017-08-16

    Recent psychophysics data suggest that speech perception is not limited by the capacity of the auditory system to encode fast acoustic variations through neural γ activity, but rather by the time given to the brain to decode them. Whether the decoding process is bounded by the capacity of θ rhythm to follow syllabic rhythms in speech, or constrained by a more endogenous top-down mechanism, e.g., involving β activity, is unknown. We addressed the dynamics of auditory decoding in speech comprehension by challenging syllable tracking and speech decoding using comprehensible and incomprehensible time-compressed auditory sentences. We recorded EEGs in human participants and found that neural activity in both θ and γ ranges was sensitive to syllabic rate. Phase patterns of slow neural activity consistently followed the syllabic rate (4-14 Hz), even when this rate went beyond the classical θ range (4-8 Hz). The power of θ activity increased linearly with syllabic rate but showed no sensitivity to comprehension. Conversely, the power of β (14-21 Hz) activity was insensitive to the syllabic rate, yet reflected comprehension on a single-trial basis. We found different long-range dynamics for θ and β activity, with β activity building up in time while more contextual information becomes available. This is consistent with the roles of θ and β activity in stimulus-driven versus endogenous mechanisms. These data show that speech comprehension is constrained by concurrent stimulus-driven θ and low-γ activity, and by endogenous β activity, but not primarily by the capacity of θ activity to track the syllabic rhythm. SIGNIFICANCE STATEMENT Speech comprehension partly depends on the ability of the auditory cortex to track syllable boundaries with θ-range neural oscillations. The reason comprehension drops when speech is accelerated could hence be because θ oscillations can no longer follow the syllabic rate. Here, we presented subjects with comprehensible and

  4. A Report of On-Going Research Aimed at Developing Unweighted and Weighted Syllable Lists.

    ERIC Educational Resources Information Center

    Sakiey, Elizabeth

    Knowing which syllables are most commonly used should aid in linguistic research and in the preparation of curriculum materials, particularly in reading. A research project has been undertaken to develop unweighted and weighted (by the frequency of the words in which they appear) syllable lists. At present, two of the project's three phases are…

  5. Native language shapes automatic neural processing of speech.

    PubMed

    Intartaglia, Bastien; White-Schwoch, Travis; Meunier, Christine; Roman, Stéphane; Kraus, Nina; Schön, Daniele

    2016-08-01

    The development of the phoneme inventory is driven by the acoustic-phonetic properties of one's native language. Neural representation of speech is known to be shaped by language experience, as indexed by cortical responses, and recent studies suggest that subcortical processing also exhibits this attunement to native language. However, most work to date has focused on the differences between tonal and non-tonal languages that use pitch variations to convey phonemic categories. The aim of this cross-language study is to determine whether subcortical encoding of speech sounds is sensitive to language experience by comparing native speakers of two non-tonal languages (French and English). We hypothesized that neural representations would be more robust and fine-grained for speech sounds that belong to the native phonemic inventory of the listener, and especially for the dimensions that are phonetically relevant to the listener such as high frequency components. We recorded neural responses of American English and French native speakers, listening to natural syllables of both languages. Results showed that, independently of the stimulus, American participants exhibited greater neural representation of the fundamental frequency compared to French participants, consistent with the importance of the fundamental frequency to convey stress patterns in English. Furthermore, participants showed more robust encoding and more precise spectral representations of the first formant when listening to the syllable of their native language as compared to non-native language. These results align with the hypothesis that language experience shapes sensory processing of speech and that this plasticity occurs as a function of what is meaningful to a listener. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. The perception of syllable affiliation of singleton stops in repetitive speech.

    PubMed

    de Jong, Kenneth J; Lim, Byung-Jin; Nagao, Kyoko

    2004-01-01

    Stetson (1951) noted that repeating singleton coda consonants at fast speech rates causes them to be perceived as onset consonants affiliated with a following vowel. The current study documents the perception of rate-induced resyllabification, as well as what temporal properties give rise to the perception of syllable affiliation. Stimuli were extracted from a previous study of repeated stop + vowel and vowel + stop syllables (de Jong, 2001a, 2001b). Forced-choice identification tasks show that slow repetitions are clearly distinguished. As speakers increase rate, they reach a point after which listeners disagree as to the affiliation of the stop. This pattern is found for voiced and voiceless consonants using different stimulus extraction techniques. Acoustic models of the identifications indicate that the sudden shift in syllabification occurs with the loss of an acoustic hiatus between successive syllables. Acoustic models of the fast rate identifications indicate that various other qualities, such as consonant voicing, affect the probability that the consonants will be perceived as onsets. These results indicate a model of syllabic affiliation in which specific juncture-marking aspects of the signal dominate parsing, and in their absence other differences provide additional, weaker cues to syllabic affiliation.

  7. In Search of Yesterday's Words: Reactivating a Long-Forgotten Language.

    ERIC Educational Resources Information Center

    de Bot, Kees; Stoessel, Saskia

    2000-01-01

    Addresses the fate of languages acquired during childhood that have not been used in a long time to find out if they are lost, overridden by other languages acquired later, or maintained despite a lack of use. German subjects were tested for their knowledge of Dutch, which they acquired as a second language during childhood. (Author/VWL)

  8. Extricating Manual and Non-Manual Features for Subunit Level Medical Sign Modelling in Automatic Sign Language Classification and Recognition.

    PubMed

    R, Elakkiya; K, Selvamani

    2017-09-22

    Subunit segmentation and modelling in medical sign language is an important problem in linguistics-oriented and vision-based Sign Language Recognition (SLR). Many previous efforts have approached functional subunits from the viewpoint of linguistic syllables, but syllable-based subunit extraction is not feasible with real-world computer vision techniques. Moreover, present recognition systems are designed to detect signer-dependent actions only under restricted laboratory conditions. This paper addresses these two issues: (1) subunit extraction and (2) signer-independent action in visual sign language recognition. Subunit extraction involves the sequential and parallel breakdown of sign gestures without any prior knowledge of syllables or the number of subunits. A novel Bayesian Parallel Hidden Markov Model (BPaHMM) is introduced for subunit extraction, combining manual and non-manual features to yield better classification and recognition of signs. Signer-independent action uses a single web camera to capture different signers' behaviour patterns and to support cross-signer validation. Experimental results show that the proposed signer-independent, subunit-level modelling for sign language classification and recognition improves on other existing work.

  9. Basic auditory processing and sensitivity to prosodic structure in children with specific language impairments: a new look at a perceptual hypothesis

    PubMed Central

    Cumming, Ruth; Wilson, Angela; Goswami, Usha

    2015-01-01

    Children with specific language impairments (SLIs) show impaired perception and production of spoken language, and can also present with motor, auditory, and phonological difficulties. Recent auditory studies have shown impaired sensitivity to amplitude rise time (ART) in children with SLIs, along with non-speech rhythmic timing difficulties. Linguistically, these perceptual impairments should affect sensitivity to speech prosody and syllable stress. Here we used two tasks requiring sensitivity to prosodic structure, the DeeDee task and a stress misperception task, to investigate this hypothesis. We also measured auditory processing of ART, rising pitch and sound duration, in both speech (“ba”) and non-speech (tone) stimuli. Participants were 45 children with SLI aged on average 9 years and 50 age-matched controls. We report data for all the SLI children (N = 45, IQ varying), as well as for two independent SLI subgroupings with intact IQ. One subgroup, “Pure SLI,” had intact phonology and reading (N = 16), the other, “SLI PPR” (N = 15), had impaired phonology and reading. Problems with syllable stress and prosodic structure were found for all the group comparisons. Both sub-groups with intact IQ showed reduced sensitivity to ART in speech stimuli, but the PPR subgroup also showed reduced sensitivity to sound duration in speech stimuli. Individual differences in processing syllable stress were associated with auditory processing. These data support a new hypothesis, the “prosodic phrasing” hypothesis, which proposes that grammatical difficulties in SLI may reflect perceptual difficulties with global prosodic structure related to auditory impairments in processing amplitude rise time and duration. PMID:26217286

  10. Conditioning of Attitudes Using a Backward Conditioning Paradigm. Language, Personality, Social and Cross-Cultural Study and Measurement of the Human A-R-D (Motivational) System.

    ERIC Educational Resources Information Center

    Brewer, Barbara A.; Gross, Michael C.

    In order to test whether meaning will transfer when a backward conditioning paradigm is utilized, Staats' language conditioning procedure, including the pairing of unconditioned stimulus (UCS) evaluative words with conditioned stimulus (CS) nonsense syllables, was modified so that the UCS words preceded the CS nonsense syllables on each trial.…

  11. Speech perception and phonological short-term memory capacity in language impairment: preliminary evidence from adolescents with specific language impairment (SLI) and autism spectrum disorders (ASD).

    PubMed

    Loucas, Tom; Riches, Nick Greatorex; Charman, Tony; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Baird, Gillian

    2010-01-01

    The cognitive bases of language impairment in specific language impairment (SLI) and autism spectrum disorders (ASD) were investigated in a novel non-word comparison task which manipulated phonological short-term memory (PSTM) and speech perception, both implicated in poor non-word repetition. This study aimed to investigate the contributions of PSTM and speech perception in non-word processing and whether individuals with SLI and ASD plus language impairment (ALI) show similar or different patterns of deficit in these cognitive processes. Three groups of adolescents (aged 14-17 years), 14 with SLI, 16 with ALI, and 17 age and non-verbal IQ matched typically developing (TD) controls, made speeded discriminations between non-word pairs. Stimuli varied in PSTM load (two- or four-syllables) and speech perception load (mismatches on a word-initial or word-medial segment). Reaction times showed effects of both non-word length and mismatch position and these factors interacted: four-syllable and word-initial mismatch stimuli resulted in the slowest decisions. Individuals with language impairment showed the same pattern of performance as those with typical development in the reaction time data. A marginal interaction between group and item length was driven by the SLI and ALI groups being less accurate with long items than short ones, a difference not found in the TD group. Non-word discrimination suggests that there are similarities and differences between adolescents with SLI and ALI and their TD peers. Reaction times appear to be affected by increasing PSTM and speech perception loads in a similar way. However, there was some, albeit weaker, evidence that adolescents with SLI and ALI are less accurate than TD individuals, with both showing an effect of PSTM load. 
This may indicate that, at some level, the processing substrate supporting both PSTM and speech perception is intact in adolescents with SLI and ALI, but that both groups may have impaired access to PSTM resources.

  12. A Nonword Repetition Task for Speakers with Misarticulations: The Syllable Repetition Task (SRT)

    PubMed Central

    Shriberg, Lawrence D.; Lohmeier, Heather L.; Campbell, Thomas F.; Dollaghan, Christine A.; Green, Jordan R.; Moore, Christopher A.

    2010-01-01

    Purpose Conceptual and methodological confounds occur when non(sense) word repetition tasks are administered to speakers who do not have the target speech sounds in their phonetic inventories or who habitually misarticulate targeted speech sounds. We describe a nonword repetition task, the Syllable Repetition Task (SRT), that eliminates this confound and report findings from three validity studies. Method Ninety-five preschool children with Speech Delay and 63 with Typical Speech completed an assessment battery that included the Nonword Repetition Task (NRT: Dollaghan & Campbell, 1998) and the SRT. SRT stimuli include only four of the earliest occurring consonants and one early occurring vowel. Results Study 1 findings indicated that the SRT eliminated the speech confound in nonword testing with speakers who misarticulate. Study 2 findings indicated that the accuracy of the SRT to identify expressive language impairment was comparable to findings for the NRT. Study 3 findings illustrated the SRT’s potential to interrogate speech processing constraints underlying poor nonword repetition accuracy. Results supported both memorial and auditory-perceptual encoding constraints underlying nonword repetition errors in children with speech-language impairment. Conclusion The SRT appears to be a psychometrically stable and substantively informative nonword repetition task for emerging genetic and other research with speakers who misarticulate. PMID:19635944

  13. The timing of language learning shapes brain structure associated with articulation.

    PubMed

    Berken, Jonathan A; Gracco, Vincent L; Chen, Jen-Kai; Klein, Denise

    2016-09-01

    We compared the brain structure of highly proficient simultaneous (two languages from birth) and sequential (second language after age 5) bilinguals, who differed only in their degree of native-like accent, to determine how the brain develops when a skill is acquired from birth versus later in life. For the simultaneous bilinguals, gray matter density was increased in the left putamen, as well as in the left posterior insula, right dorsolateral prefrontal cortex, and left and right occipital cortex. For the sequential bilinguals, gray matter density was increased in the bilateral premotor cortex. Sequential bilinguals with better accents also showed greater gray matter density in the left putamen, and in several additional brain regions important for sensorimotor integration and speech-motor control. Our findings suggest that second language learning results in enhanced brain structure of specific brain areas, which depends on whether two languages are learned simultaneously or sequentially, and on the extent to which native-like proficiency is acquired.

  14. Impact of language on development of auditory-visual speech perception.

    PubMed

    Sekiyama, Kaoru; Burnham, Denis

    2008-03-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.

  15. Onset of Duplicated Syllable Babbling and Unimanual Handedness in Infancy: Evidence for Developmental Change in Hemispheric Specialization?

    ERIC Educational Resources Information Center

    Ramsay, Douglas S.

    1984-01-01

    Examines the possible developmental relationship between unimanual handedness and duplicated syllable babbling. Thirty infants were tested at weekly intervals between five months of age and eight weeks after the onset of duplicated syllable babbling. Results suggest developmental change in hemispheric specialization or at least asymmetrical…

  16. A Longitudinal Study of Handwriting Skills in Pre-Schoolers: The Acquisition of Syllable Oriented Programming Strategies

    ERIC Educational Resources Information Center

    Soler Vilageliu, Olga; Kandel, Sonia

    2012-01-01

    Previous studies have shown the relevance of the syllable as a programming unit in handwriting production, both in adults and elementary school children. This longitudinal study focuses on the acquisition of writing skills in a group of preschoolers. It examines how and when the syllable structure of the word starts regulating motor programming in…

  17. Neural Language Processing in Adolescent First-Language Learners

    PubMed Central

    Ferjan Ramirez, Naja; Leonard, Matthew K.; Torres, Christina; Hatrak, Marla; Halgren, Eric; Mayberry, Rachel I.

    2014-01-01

    The relation between the timing of language input and development of neural organization for language processing in adulthood has been difficult to tease apart because language is ubiquitous in the environment of nearly all infants. However, within the congenitally deaf population are individuals who do not experience language until after early childhood. Here, we investigated the neural underpinnings of American Sign Language (ASL) in 2 adolescents who had no sustained language input until they were approximately 14 years old. Using anatomically constrained magnetoencephalography, we found that recently learned signed words mainly activated right superior parietal, anterior occipital, and dorsolateral prefrontal areas in these 2 individuals. This spatiotemporal activity pattern was significantly different from the left fronto-temporal pattern observed in young deaf adults who acquired ASL from birth, and from that of hearing young adults learning ASL as a second language for a similar length of time as the cases. These results provide direct evidence that the timing of language experience over human development affects the organization of neural language processing. PMID:23696277

  18. Asymmetries in Generalizing Alternations to and from Initial Syllables

    ERIC Educational Resources Information Center

    Becker, Michael; Nevins, Andrew; Levine, Jonathan

    2012-01-01

    In the English lexicon, laryngeal alternations in the plural (e.g. "leaf" ~ "leaves") impact monosyllables more than finally stressed polysyllables. This is the opposite of what happens typologically, and would thereby run contrary to the predictions of "initial-syllable faithfulness." Despite the lexical pattern, in a wug test we found…

  19. Science as a Second Language: Acquiring Fluency through Science Enterprises

    NASA Astrophysics Data System (ADS)

    Shope, R. E.

    2012-12-01

    , spelling, and speaking. Fluency results primarily from language acquisition and secondarily from language learning. We can view the problem of science education and communication as similar to language acquisition. Science Learning is a formal education process, the school science aspect of the school day: the direct teaching of standards-aligned science content. Science Acquisition is an informal process that occurs in the midst of exploring, solving problems, seeking answers to questions, playing, experimenting for pleasure, conversing, discussing, where the focus is not specifically on science content development, but on the inquiry activity, driven by the curiosity of the participant. Comprehensible input refers to the premise that we acquire language in the midst of activity when we understand the message; that is, when we understand what we hear or what we read or what we see. Acquisition is caused by comprehensible input as it occurs in the midst of a rich environment of language activity while doing something of interest to the learner. Providing comprehensible input is not the same as oversimplifying or "dumbing down." It is devising ways to create conditions where the interest of the learner is piqued.

  20. Science As A Second Language: Acquiring Fluency through Science Enterprises

    NASA Astrophysics Data System (ADS)

    Shope, R.; EcoVoices Expedition Team

    2013-05-01

    Science Enterprises are problems that students genuinely want to solve, questions that students genuinely want to answer, that naturally entail reading, writing, investigation, and discussion. Engaging students in personally-relevant science enterprises provides both a diagnostic opportunity and a context for providing students the comprehensible input they need. We can differentiate instruction by creating science enterprise zones that are set up for the incremental increase in challenge for the students. Comprehensible input makes reachable those just-out-of-reach concepts in the mix of the familiar and the new. EcoVoices takes students on field research expeditions within an urban natural area, the San Gabriel River Discovery Center. This project engages students in science enterprises focused on understanding ecosystems, ecosystem services, and the dynamics of climate change. A sister program, EcoVoces, has been launched in Mexico, in collaboration with the Universidad Loyola del Pacífico. The project draws on five components: 1) The ED3U Science Inquiry Model, a learning cycle model that accounts for conceptual change: Explore { Diagnose, Design, Discuss } Use. 2) The ¿NQUIRY Wheel, a compass of scientific inquiry strategies; 3) Inquiry Science Expeditions, a way of laying out a science learning environment, emulating a field and lab research collaboratory; 4) The Science Educative Experience Scale, a diagnostic measure of the quality of the science learning experience; and 5) Mimedia de la Ciencia, participatory enactment of science concepts using techniques of mime and improvisational theater. BACKGROUND: Science has become a vehicle for teaching reading, writing, and other communication skills, across the curriculum. This new emphasis creates renewed motivation for Scientists and Science Educators to work collaboratively to explore the common ground between acquiring science understanding and language acquisition theory. 
Language Acquisition is an informal process that occurs in the midst of

  1. Impact of Language on Development of Auditory-Visual Speech Perception

    ERIC Educational Resources Information Center

    Sekiyama, Kaoru; Burnham, Denis

    2008-01-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various…

  2. Segments and segmental properties in cross-language perception: Korean perception of English obstruents in various prosodic locations

    NASA Astrophysics Data System (ADS)

    de Jong, Kenneth; Silbert, Noah; Park, Hanyong

    2004-05-01

    Experimental models of cross-language perception and second-language acquisition (such as PAM and SLM) typically treat language differences in terms of whether the two languages share phonological segmental categories. Linguistic models, by contrast, generally examine properties which cross classify segments, such as features, rules, or prosodic constraints. Such models predict that perceptual patterns found for one segment will generalize to other segments of the same class. This paper presents perceptual identifications of Korean listeners to a set of voiced and voiceless English stops and fricatives in various prosodic locations to determine the extent to which such generality occurs. Results show some class-general effects; for example, voicing identification patterns generalize from stops, which occur in Korean, to nonsibilant fricatives, which are new to Korean listeners. However, when identification is poor, there are clear differences between segments within the same class. For example, in identifying stops and fricatives, both point of articulation and prosodic position bias perceptions; coronals are more often labeled fricatives, and syllable initial obstruents are more often labeled stops. These results suggest that class-general perceptual patterns are not a simple consequence of the structure of the perceptual system, but need to be acquired by factoring out within-class differences.

  3. Effects of blocking and presentation on the recognition of word and nonsense syllables in noise

    NASA Astrophysics Data System (ADS)

    Benkí, José R.

    2003-10-01

    Listener expectations may have significant effects on spoken word recognition, modulating word similarity effects from the lexicon. This study investigates the effect of blocking by lexical status on the recognition of word and nonsense syllables in noise. 240 phonemically matched word and nonsense CVC syllables [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84, 101-108 (1988)] were presented to listeners at different S/N ratios for identification. In the mixed condition, listeners were presented with blocks containing both words and nonwords, while listeners in the blocked condition were presented with the trials in blocks containing either words or nonwords. The targets were presented in isolation with 50 ms of preceding and following noise. Preliminary results indicate no effect of blocking on accuracy for either word or nonsense syllables; results from neighborhood density analyses will be presented. Consistent with previous studies, a j-factor analysis indicates that words are perceived as containing at least 0.5 fewer independent units than nonwords in both conditions. Relative to previous work on syllables presented in a frame sentence [Benkí, J. Acoust. Soc. Am. 113, 1689-1705 (2003)], initial consonants were perceived significantly less accurately, while vowels and final consonants were perceived at comparable rates.
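The j-factor analysis mentioned in this record can be illustrated with a short sketch. Under Boothroyd and Nittrouer's model, an item recognized with probability p_whole, built from parts each recognized with probability p_part, behaves as if it contains j = log(p_whole)/log(p_part) independent units. The recognition probabilities below are hypothetical, chosen only to show how a ~0.5-unit word advantage would appear; they are not data from the study:

```python
import math

def j_factor(p_whole, p_part):
    """Boothroyd & Nittrouer's j-factor: the number of effectively
    independent perceptual units, from p_whole = p_part ** j."""
    return math.log(p_whole) / math.log(p_part)

# Hypothetical recognition probabilities: each phoneme recognized at 0.8.
# A 3-phoneme nonword behaves as 3 independent units, while lexical
# support makes a matched word behave like ~2.5 units.
j_nonword = j_factor(0.8 ** 3, 0.8)   # 3.0
j_word = j_factor(0.8 ** 2.5, 0.8)    # 2.5
advantage = j_nonword - j_word        # the ~0.5-unit word advantage
```

A smaller j for words than for phonemically matched nonwords is exactly the pattern the abstract reports.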

  4. The transition from the core vowels to the following segments in Japanese children who stutter: the second, third and fourth syllables.

    PubMed

    Matsumoto-Shimamori, Sachiyo; Ito, Tomohiko; Fukuda, Suzy E; Fukuda, Shinji

    2011-09-01

    Shimamori and Ito (2007, Syllable weight and phonological encoding in Japanese children who stutter. Japanese Journal of Special Education, 44, 451-462; 2008, Syllable weight and frequency of stuttering: Comparison between children who stutter with and without a family history of stuttering. Japanese Journal of Special Education, 45, 437-445; 2009, Difference in frequency of stuttering between light and heavy syllables in the production of monosyllables: From the viewpoint of phonetic transition. The Japanese Journal of Logopedics and Phoniatrics, 50, 116-122 (in Japanese)) proposed the hypothesis that in Japanese the transition from the core vowels (CVs) to the following segments affected the occurrence of stuttering. However, these studies investigated the transition in the first syllables only, and the effect of the transition in the second, third and fourth syllables was not addressed. The purpose of this study was to investigate whether the transition from the CVs in the second, third and fourth syllables affected the occurrence of stuttering. The participants were 21 Japanese children. A non-word naming task and a non-word reading task were used. The frequency of stuttering was not significantly different where the number of transitions from the CVs differed on either task. These results suggest that the transition from the CVs in the second, third and fourth syllables does not have a significant effect on the occurrence of stuttering in Japanese.

  5. Mapping the cortical representation of speech sounds in a syllable repetition task.

    PubMed

    Markiewicz, Christopher J; Bohland, Jason W

    2016-11-01

    Speech repetition relies on a series of distributed cortical representations and functional pathways. A speaker must map auditory representations of incoming sounds onto learned speech items, maintain an accurate representation of those items in short-term memory, interface that representation with the motor output system, and fluently articulate the target sequence. A "dorsal stream" consisting of posterior temporal, inferior parietal and premotor regions is thought to mediate auditory-motor representations and transformations, but the nature and activation of these representations for different portions of speech repetition tasks remains unclear. Here we mapped the correlates of phonetic and/or phonological information related to the specific phonemes and syllables that were heard, remembered, and produced using a series of cortical searchlight multi-voxel pattern analyses trained on estimates of BOLD responses from individual trials. Based on responses linked to input events (auditory syllable presentation), predictive vowel-level information was found in the left inferior frontal sulcus, while syllable prediction revealed significant clusters in the left ventral premotor cortex and central sulcus and the left mid superior temporal sulcus. Responses linked to output events (the GO signal cueing overt production) revealed strong clusters of vowel-related information bilaterally in the mid to posterior superior temporal sulcus. For the prediction of onset and coda consonants, input-linked responses yielded distributed clusters in the superior temporal cortices, which were further informative for classifiers trained on output-linked responses. Output-linked responses in the Rolandic cortex made strong predictions for the syllables and consonants produced, but their predictive power was reduced for vowels. The results of this study provide a systematic survey of how cortical response patterns covary with the identity of speech sounds, which will help to constrain…

  6. Children creating language: how Nicaraguan sign language acquired a spatial grammar.

    PubMed

    Senghas, A; Coppola, M

    2001-07-01

    It has long been postulated that language is not purely learned, but arises from an interaction between environmental exposure and innate abilities. The innate component becomes more evident in rare situations in which the environment is markedly impoverished. The present study investigated the language production of a generation of deaf Nicaraguans who had not been exposed to a developed language. We examined the changing use of early linguistic structures (specifically, spatial modulations) in a sign language that has emerged since the Nicaraguan group first came together: in under two decades, sequential cohorts of learners systematized the grammar of this new sign language. We examined whether the systematicity being added to the language stems from children or adults: our results indicate that such changes originate in children aged 10 and younger. Thus, sequential cohorts of interacting young children collectively possess the capacity not only to learn, but also to create, language.

  7. ERP evidence for implicit L2 word stress knowledge in listeners of a fixed-stress language.

    PubMed

    Kóbor, Andrea; Honbolygó, Ferenc; Becker, Angelika B C; Schild, Ulrike; Csépe, Valéria; Friedrich, Claudia K

    2018-06-01

    Languages with contrastive stress, such as English or German, distinguish some words only via the stress status of their syllables, such as "CONtent" and "conTENT" (capitals indicate a stressed syllable). Listeners with a fixed-stress native language, such as Hungarian, have difficulties in explicitly discriminating variation of the stress position in a second language (L2). However, Event-Related Potentials (ERPs) indicate that Hungarian listeners implicitly notice variation from their native fixed-stress pattern. Here we used ERPs to investigate Hungarian listeners' implicit L2 processing. In a cross-modal word fragment priming experiment, we presented spoken stressed and unstressed German word onsets (primes) followed by printed versions of initially stressed and initially unstressed German words (targets). ERPs reflected stress priming exerted by both prime types. This indicates that Hungarian listeners implicitly linked German words with the stress status of the primes. Thus, the formerly described explicit stress discrimination difficulty associated with a fixed-stress native language does not generalize to implicit aspects of L2 word stress processing. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Slipped Lips: Onset Asynchrony Detection of Auditory-Visual Language in Autism

    ERIC Educational Resources Information Center

    Grossman, Ruth B.; Schneps, Matthew H.; Tager-Flusberg, Helen

    2009-01-01

    Background: It has frequently been suggested that individuals with autism spectrum disorder (ASD) have deficits in auditory-visual (AV) sensory integration. Studies of language integration have mostly used non-word syllables presented in congruent and incongruent AV combinations and demonstrated reduced influence of visual speech in individuals…

  9. Phase II trial of a syllable-timed speech treatment for school-age children who stutter.

    PubMed

    Andrews, Cheryl; O'Brian, Sue; Onslow, Mark; Packman, Ann; Menzies, Ross; Lowe, Robyn

    2016-06-01

    A recent clinical trial (Andrews et al., 2012) showed Syllable Timed Speech (STS) to be a potentially useful treatment agent for the reduction of stuttering for school-age children. The present trial investigated a modified version of this program that incorporated parent verbal contingencies. Participants were 22 stuttering children aged 6-11 years. Treatment involved training the children and their parents to use STS in conversation. Parents were also taught to use verbal contingencies in response to their child's stuttered and stutter-free speech and to praise their child's use of STS. Outcome assessments were conducted pre-treatment, at the completion of Stage 1 of the program and 6 months and 12 months after Stage 1 completion. Outcomes are reported for the 19 children who completed Stage 1 of the program. The group mean percent stuttering reduction was 77% from pre-treatment to 12 months post-treatment, and 82% with the two least responsive participants removed. There was considerable variation in response to the treatment. Eleven of the children showed reduced avoidance of speaking situations and 18 were more satisfied with their fluency post-treatment. However, there was some suggestion that stuttering control was not sufficient to fully eliminate situation avoidance for the children. The results of this trial are sufficiently encouraging to warrant further clinical trials of the method. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Time course of syllabic and sub-syllabic processing in Mandarin word production: Evidence from the picture-word interference paradigm.

    PubMed

    Wang, Jie; Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2017-06-05

    The time course of phonological encoding in Mandarin monosyllabic word production was investigated by using the picture-word interference paradigm. Participants were asked to name pictures in Mandarin while visual distractor words were presented before, at, or after picture onset (i.e., stimulus-onset asynchrony/SOA = -100, 0, or +100 ms, respectively). Compared with the unrelated control, the distractors sharing atonal syllables with the picture names significantly facilitated the naming responses at -100- and 0-ms SOAs. In addition, the facilitation effect of sharing word-initial segments only appeared at 0-ms SOA, and null effects were found for sharing word-final segments. These results indicate that both syllables and subsyllabic units play important roles in Mandarin spoken word production and more critically that syllabic processing precedes subsyllabic processing. The current results lend strong support to the proximate units principle (O'Seaghdha, Chen, & Chen, 2010), which holds that the phonological structure of spoken word production is language-specific and that atonal syllables are the proximate phonological units in Mandarin Chinese. On the other hand, the significance of word-initial segments over word-final segments suggests that serial processing of segmental information seems to be universal across Germanic languages and Chinese, which remains to be verified in future studies.

  11. A Cross-Language Study of Perception of Lexical Stress in English

    ERIC Educational Resources Information Center

    Yu, Vickie Y.; Andruski, Jean E.

    2010-01-01

    This study investigates the question of whether language background affects the perception of lexical stress in English. Thirty native English speakers and 30 native Chinese learners of English participated in a stressed-syllable identification task and a discrimination task involving three types of stimuli (real words/pseudowords/hums). The…

  12. Polar-phase indices of perioral muscle reciprocity during syllable production in Parkinson's disease.

    PubMed

    Chu, Shin Ying; Barlow, Steven M; Lee, Jaehoon; Wang, Jingyan

    2017-12-01

    This research characterised perioral muscle reciprocity and amplitude ratio in lower lip during bilabial syllable production [pa] at three rates to understand the neuromotor dynamics and scaling of motor speech patterns in individuals with Parkinson's disease (PD). Electromyographic (EMG) signals of the orbicularis oris superior [OOS], orbicularis oris inferior [OOI] and depressor labii inferioris [DLI] were recorded during syllable production and expressed as polar-phase notations. PD participants exhibited the general features of reciprocity between OOS, OOI and DLI muscles as reflected in the EMG during syllable production. The control group showed significantly higher integrated EMG amplitude ratio in the DLI:OOS muscle pairs than PD participants. No speech rate effects were found in EMG muscle reciprocity and amplitude magnitude across all muscle pairs. Similar patterns of muscle reciprocity in PD and controls suggest that corticomotoneuronal output to the facial nucleus and respective perioral muscles is relatively well-preserved in our cohort of mild idiopathic PD participants. Reduction of EMG amplitude ratio among PD participants is consistent with the putative reduction in the thalamocortical activation characteristic of this disease which limits motor cortex drive from generating appropriate commands which contributes to bradykinesia and hypokinesia of the orofacial mechanism.

  13. Towards an Auditory Account of Speech Rhythm: Application of a Model of the Auditory "Primal Sketch" to Two Multi-Language Corpora

    ERIC Educational Resources Information Center

    Lee, Christopher S.; Todd, Neil P. McAngus

    2004-01-01

    The world's languages display important differences in their rhythmic organization; most particularly, different languages seem to privilege different phonological units (mora, syllable, or stress foot) as their basic rhythmic unit. There is now considerable evidence that such differences have important consequences for crucial aspects of language…

  14. Mirror neurons, birdsong, and human language: a hypothesis.

    PubMed

    Levy, Florence

    2011-01-01

    The mirror system hypothesis and investigations of birdsong are reviewed in relation to the significance for the development of human symbolic and language capacity, in terms of three fundamental forms of cognitive reference: iconic, indexical, and symbolic. Mirror systems are initially iconic but can progress to indexical reference when produced without the need for concurrent stimuli. Developmental stages in birdsong are also explored with reference to juvenile subsong vs complex stereotyped adult syllables, as an analogy with human language development. While birdsong remains at an indexical reference stage, human language benefits from the capacity for symbolic reference. During a pre-linguistic "babbling" stage, recognition of native phonemic categories is established, allowing further development of subsequent prefrontal and linguistic circuits for sequential language capacity.

  15. Mirror Neurons, Birdsong, and Human Language: A Hypothesis

    PubMed Central

    Levy, Florence

    2012-01-01

    The mirror system hypothesis and investigations of birdsong are reviewed in relation to the significance for the development of human symbolic and language capacity, in terms of three fundamental forms of cognitive reference: iconic, indexical, and symbolic. Mirror systems are initially iconic but can progress to indexical reference when produced without the need for concurrent stimuli. Developmental stages in birdsong are also explored with reference to juvenile subsong vs complex stereotyped adult syllables, as an analogy with human language development. While birdsong remains at an indexical reference stage, human language benefits from the capacity for symbolic reference. During a pre-linguistic “babbling” stage, recognition of native phonemic categories is established, allowing further development of subsequent prefrontal and linguistic circuits for sequential language capacity. PMID:22287950

  16. Language Aptitude: Desirable Trait or Acquirable Attribute?

    ERIC Educational Resources Information Center

    Singleton, David

    2017-01-01

    The traditional definition of language aptitude sees it as "an individual's initial state of readiness and capacity for learning a foreign language, and probable facility in doing so given the presence of motivation and opportunity" (Carroll, 1981, p. 86). This conception portrays language aptitude as a trait, in the sense of exhibiting…

  17. Input Frequency and the Acquisition of Syllable Structure in Polish

    ERIC Educational Resources Information Center

    Jarosz, Gaja; Calamaro, Shira; Zentz, Jason

    2017-01-01

    This article examines phonological development and its relationship to input statistics. Using novel data from a longitudinal corpus of spontaneous child speech in Polish, we evaluate and compare the predictions of a variety of input-based phonotactic models for syllable structure acquisition. We find that many commonly examined input statistics…

  18. [First language acquisition research and theories of language acquisition].

    PubMed

    Miller, S; Jungheim, M; Ptok, M

    2014-04-01

    In principle, a child can seemingly easily acquire any given language. First language acquisition follows a certain pattern which to some extent is found to be language independent. Since time immemorial, it has been of interest why children are able to acquire language so easily. Different disciplinary and methodological orientations addressing this question can be identified. A selective literature search in PubMed and Scopus was carried out and relevant monographs were considered. Different, partially overlapping phases can be distinguished in language acquisition research: whereas in ancient times, deprivation experiments were carried out to discover the "original human language", the era of diary studies began in the mid-19th century. From the mid-1920s onwards, behaviouristic paradigms dominated this field of research; interests were focussed on the determination of normal, average language acquisition. The subsequent linguistic period was strongly influenced by the nativist view of Chomsky and the constructivist concepts of Piaget. Speech comprehension, the role of speech input and the relevance of genetic disposition became the centre of attention. The interactionist concept led to a revival of the convergence theory according to Stern. Each of these four major theories--behaviourism, cognitivism, interactionism and nativism--has given valuable and unique impulses, but no single theory is universally accepted to provide an explanation of all aspects of language acquisition. Moreover, it can be critically questioned whether clinicians consciously refer to one of these theories in daily routine work and whether therapies are then based on this concept. It remains to be seen whether or not new theories of grammar, such as the so-called construction grammar (CxG), will eventually change the general concept of language acquisition.

  19. Acquiring variation in an artificial language: Children and adults are sensitive to socially conditioned linguistic variation.

    PubMed

    Samara, Anna; Smith, Kenny; Brown, Helen; Wonnacott, Elizabeth

    2017-05-01

    Languages exhibit sociolinguistic variation, such that adult native speakers condition the usage of linguistic variants on social context, gender, and ethnicity, among other cues. While the existence of this kind of socially conditioned variation is well-established, less is known about how it is acquired. Studies of naturalistic language use by children provide various examples where children's production of sociolinguistic variants appears to be conditioned on similar factors to adults' production, but it is difficult to determine whether this reflects knowledge of sociolinguistic conditioning or systematic differences in the input to children from different social groups. Furthermore, artificial language learning experiments have shown that children have a tendency to eliminate variation, a process which could potentially work against their acquisition of sociolinguistic variation. The current study used a semi-artificial language learning paradigm to investigate learning of the sociolinguistic cue of speaker identity in 6-year-olds and adults. Participants were trained and tested on an artificial language where nouns were obligatorily followed by one of two meaningless particles and were produced by one of two speakers (one male, one female). Particle usage was conditioned deterministically on speaker identity (Experiment 1), probabilistically (Experiment 2), or not at all (Experiment 3). Participants were given tests of production and comprehension. In Experiments 1 and 2, both children and adults successfully acquired the speaker identity cue, although the effect was stronger for adults and in Experiment 1. In addition, in all three experiments, there was evidence of regularization in participants' productions, although the type of regularization differed with age: children showed regularization by boosting the frequency of one particle at the expense of the other, while adults regularized by conditioning particle usage on lexical items. Overall, results…

  20. Statistical Learning in a Natural Language by 8-Month-Old Infants

    PubMed Central

    Pelucchi, Bruna; Hay, Jessica F.; Saffran, Jenny R.

    2013-01-01

    Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants’ ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition. PMID:19489896
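The transitional-probability statistic that this line of work builds on is simple to state: TP(X→Y) = freq(XY) / freq(X), computed over adjacent syllables, with a dip in TP marking a likely word boundary. The sketch below uses a toy syllable stream invented for illustration, not the Italian stimuli from the study:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(X -> Y) = freq(XY) / freq(X), the statistic infants are
    thought to track when segmenting words from fluent speech."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

# Toy stream: "bi" is always followed by "da" (word-internal, TP = 1.0),
# while "da" is followed by "ku" or "go" equally often (TP = 0.5 each),
# mimicking the TP dip that signals a word boundary.
stream = ["bi", "da", "ku", "bi", "da", "go"]
tps = transitional_probabilities(stream)
```

Here `tps[("bi", "da")]` is 1.0 while `tps[("da", "ku")]` and `tps[("da", "go")]` are 0.5, so a learner tracking TPs would treat "bida" as cohering more strongly than the material across the boundary.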

  1. Statistical learning in a natural language by 8-month-old infants.

    PubMed

    Pelucchi, Bruna; Hay, Jessica F; Saffran, Jenny R

    2009-01-01

    Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants' ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition.

  2. Interactive language learning by robots: the transition from babbling to word forms.

    PubMed

    Lyon, Caroline; Nehaniv, Chrystopher L; Saunders, Joe

    2012-01-01

    The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of contingent interaction in real-time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency dependent mechanism. This work shows the potential of human-robot interaction systems in studies of the dynamics of early language…

  3. Test Accommodations for English Language Learners Using the Student Language Assessment Plan

    ERIC Educational Resources Information Center

    Brantley, Sherri G.

    2014-01-01

    Public schools are attempting to work with a growing number of immigrant English language learners (ELLs) in the U.S. education system at a time when the No Child Left Behind (NCLB) Act has mandated that ELLs achieve proficiency on assessments even if they have not acquired sufficient language proficiency. The purpose of this qualitative case…

  4. The Weaker Language in Early Child Bilingualism: Acquiring a First Language as a Second Language?

    ERIC Educational Resources Information Center

    Meisel, Jurgen M.

    2007-01-01

    Past research demonstrates that first language (L1)-like competence in each language can be attained in simultaneous acquisition of bilingualism by mere exposure to the target languages. The question is whether this is also true for the "weaker" language (WL). The WL hypothesis claims that the WL differs fundamentally from monolingual L1 and…

  5. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs

    PubMed Central

    ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110

  6. Attention Is Required for Knowledge-Based Sequential Grouping: Insights from the Integration of Syllables into Words.

    PubMed

    Ding, Nai; Pan, Xunyi; Luo, Cheng; Su, Naifei; Zhang, Wen; Zhang, Jianfeng

    2018-01-31

    How the brain groups sequential sensory events into chunks is a fundamental question in cognitive neuroscience. This study investigates whether top-down attention or specific tasks are required for the brain to apply lexical knowledge to group syllables into words. Neural responses tracking the syllabic and word rhythms of a rhythmic speech sequence were concurrently monitored using electroencephalography (EEG). The participants performed different tasks, attending to either the rhythmic speech sequence or a distractor, which was another speech stream or a nonlinguistic auditory/visual stimulus. Attention to speech, but not a lexical-meaning-related task, was required for reliable neural tracking of words, even when the distractor was a nonlinguistic stimulus presented cross-modally. Neural tracking of syllables, however, was reliably observed in all tested conditions. These results strongly suggest that neural encoding of individual auditory events (i.e., syllables) is automatic, while knowledge-based construction of temporal chunks (i.e., words) crucially relies on top-down attention. SIGNIFICANCE STATEMENT Why we cannot understand speech when not paying attention is an old question in psychology and cognitive neuroscience. Speech processing is a complex process that involves multiple stages, e.g., hearing and analyzing the speech sound, recognizing words, and combining words into phrases and sentences. The current study investigates which speech-processing stage is blocked when we do not listen carefully. We show that the brain can reliably encode syllables, basic units of speech sounds, even when we do not pay attention. Nevertheless, when distracted, the brain cannot group syllables into multisyllabic words, which are basic units for speech meaning. Therefore, the process of converting speech sound into meaning crucially relies on attention. Copyright © 2018 the authors 0270-6474/18/381178-11$15.00/0.
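The frequency-tagging logic behind concurrently monitoring syllabic and word rhythms can be sketched in a few lines. The numbers below are hypothetical illustrations, not the study's parameters: syllables presented at 4 Hz, so that two-syllable words recur at 2 Hz, with a response that tracks both levels showing spectral peaks at both frequencies:

```python
import math

# Simulated response tracking a hypothetical 4 Hz syllable rhythm plus a
# weaker 2 Hz word rhythm (the attention-dependent component in the study).
fs = 100.0   # sampling rate (Hz)
n = 1000     # 10 s of signal
signal = [math.sin(2 * math.pi * 4 * i / fs)
          + 0.5 * math.sin(2 * math.pi * 2 * i / fs)
          for i in range(n)]

def amplitude_at(freq, x, fs):
    """Magnitude of the discrete Fourier transform of x at one frequency,
    normalized so a unit-amplitude sinusoid yields 0.5."""
    re = sum(v * math.cos(2 * math.pi * freq * i / fs) for i, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * freq * i / fs) for i, v in enumerate(x))
    return math.hypot(re, im) / len(x)

peak_4hz = amplitude_at(4.0, signal, fs)  # ~0.5: syllable-rate tracking
peak_2hz = amplitude_at(2.0, signal, fs)  # ~0.25: word-rate tracking
```

Under the abstract's account, the 2 Hz (word-level) peak would vanish when attention is withdrawn, while the 4 Hz (syllable-level) peak would survive.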

  7. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    PubMed

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  8. Enhanced Passive and Active Processing of Syllables in Musician Children

    ERIC Educational Resources Information Center

    Chobert, Julie; Marie, Celine; Francois, Clement; Schon, Daniele; Besson, Mireille

    2011-01-01

    The aim of this study was to examine the influence of musical expertise in 9-year-old children on passive (as reflected by MMN) and active (as reflected by discrimination accuracy) processing of speech sounds. Musician and nonmusician children were presented with a sequence of syllables that included standards and deviants in vowel frequency,…

  9. Production of Syllable Stress in Speakers with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Paul, Rhea; Bianchi, Nancy; Augustyn, Amy; Klin, Ami; Volkmar, Fred R.

    2008-01-01

    This paper reports a study of the ability to reproduce stress in a nonsense syllable imitation task by adolescent speakers with autism spectrum disorders (ASD), as compared to typically developing (TD) age-mates. Results are reported for both raters' judgments of the subjects' stress production, as well as acoustic measures of pitch range and…

  10. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English.

    PubMed

    Banzina, Elina; Dilley, Laura C; Hewitt, Lynne E

    2016-08-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated the production of SS and UU syllables by Russian learners of English. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found, likely due to a transfer of native phonological features. Next, a cross-modal phonological priming technique combined with a lexical decision task assessed the effect of inaccurate SS and UU syllable productions on native American English listeners' speech processing. Inaccurate UU vowels led to significant inhibition of lexical access, while reduced SS vowels revealed less interference. The results have implications for understanding the role of SS and UU syllables for word recognition and English pronunciation instruction.

  11. A Study of Mexican Free-Tailed Bat Chirp Syllables: Bayesian Functional Mixed Models for Nonstationary Acoustic Time Series

    PubMed Central

    MARTINEZ, Josue G.; BOHN, Kirsten M.; CARROLL, Raymond J.

    2013-01-01

    We describe a new approach to analyze chirp syllables of free-tailed bats from two regions of Texas in which they are predominant: Austin and College Station. Our goal is to characterize any systematic regional differences in the mating chirps and assess whether individual bats have signature chirps. The data are analyzed by modeling spectrograms of the chirps as responses in a Bayesian functional mixed model. Given the variable chirp lengths, we compute the spectrograms on a relative time scale interpretable as the relative chirp position, using a variable window overlap based on chirp length. We use 2D wavelet transforms to capture correlation within the spectrogram in our modeling and obtain adaptive regularization of the estimates and inference for the region-specific spectrograms. Our model includes random effect spectrograms at the bat level to account for correlation among chirps from the same bat, and to assess relative variability in chirp spectrograms within and between bats. The modeling of spectrograms using functional mixed models is a general approach for the analysis of replicated nonstationary time series, such as our acoustical signals, to relate aspects of the signals to various predictors, while accounting for between-signal structure. This can be done on raw spectrograms when all signals are of the same length, and can be done using spectrograms defined on a relative time scale for signals of variable length in settings where the idea of defining correspondence across signals based on relative position is sensible. PMID:23997376

  12. A Study of Mexican Free-Tailed Bat Chirp Syllables: Bayesian Functional Mixed Models for Nonstationary Acoustic Time Series.

    PubMed

    Martinez, Josue G; Bohn, Kirsten M; Carroll, Raymond J; Morris, Jeffrey S

    2013-06-01

    We describe a new approach to analyze chirp syllables of free-tailed bats from two regions of Texas in which they are predominant: Austin and College Station. Our goal is to characterize any systematic regional differences in the mating chirps and assess whether individual bats have signature chirps. The data are analyzed by modeling spectrograms of the chirps as responses in a Bayesian functional mixed model. Given the variable chirp lengths, we compute the spectrograms on a relative time scale interpretable as the relative chirp position, using a variable window overlap based on chirp length. We use 2D wavelet transforms to capture correlation within the spectrogram in our modeling and obtain adaptive regularization of the estimates and inference for the region-specific spectrograms. Our model includes random effect spectrograms at the bat level to account for correlation among chirps from the same bat, and to assess relative variability in chirp spectrograms within and between bats. The modeling of spectrograms using functional mixed models is a general approach for the analysis of replicated nonstationary time series, such as our acoustical signals, to relate aspects of the signals to various predictors, while accounting for between-signal structure. This can be done on raw spectrograms when all signals are of the same length, and can be done using spectrograms defined on a relative time scale for signals of variable length in settings where the idea of defining correspondence across signals based on relative position is sensible.
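    The relative-time-scale construction described in these two records can be sketched: the window hop is chosen per signal so that every chirp, whatever its length, yields the same number of spectrogram columns, each column then corresponding to a relative position within the chirp. A minimal Python sketch under stated assumptions; the window size, column count, and toy signals are illustrative, not from the paper:

```python
import numpy as np

def relative_time_spectrogram(signal, n_fft=128, n_cols=50):
    """Magnitude spectrogram with a per-signal hop chosen so that every
    signal, whatever its length, yields exactly n_cols time bins;
    column j is then interpretable as relative position j / n_cols."""
    # Evenly spaced window starts: shorter signals get more overlap,
    # longer signals less -- the variable-overlap idea in the record.
    starts = np.linspace(0, len(signal) - n_fft, n_cols).astype(int)
    window = np.hanning(n_fft)
    cols = [np.abs(np.fft.rfft(window * signal[s:s + n_fft])) for s in starts]
    return np.array(cols).T  # shape: (n_fft // 2 + 1, n_cols)

# Two "chirps" of different lengths map onto same-shape spectrograms,
# so they can enter one functional mixed model as comparable responses.
short_chirp = np.sin(2 * np.pi * 0.05 * np.arange(1000))
long_chirp = np.sin(2 * np.pi * 0.05 * np.arange(2500))
assert relative_time_spectrogram(short_chirp).shape == \
       relative_time_spectrogram(long_chirp).shape == (65, 50)
```

    With every spectrogram on a common grid, the Bayesian functional mixed model can treat them as replicated functional responses regardless of raw chirp duration.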

  13. Rhythmic syllable-related activity in a songbird motor thalamic nucleus necessary for learned vocalizations

    PubMed Central

    Danish, Husain H.; Aronov, Dmitriy; Fee, Michale S.

    2017-01-01

    Birdsong is a complex behavior that exhibits hierarchical organization. While the representation of singing behavior and its hierarchical organization has been studied in some detail in avian cortical premotor circuits, our understanding of the role of the thalamus in adult birdsong is incomplete. Using a combination of behavioral and electrophysiological studies, we seek to expand on earlier work showing that the thalamic nucleus Uvaeformis (Uva) is necessary for the production of stereotyped, adult song in zebra finch (Taeniopygia guttata). We confirm that complete bilateral lesions of Uva abolish singing in the ‘directed’ social context, but find that in the ‘undirected’ social context, such lesions result in highly variable vocalizations similar to early babbling song in juvenile birds. Recordings of neural activity in Uva reveal strong syllable-related modulation, maximally active prior to syllable onsets and minimally active prior to syllable offsets. Furthermore, both song and Uva activity exhibit a pronounced coherent modulation at 10Hz—a pattern observed in downstream premotor areas in adult and, even more prominently, in juvenile birds. These findings are broadly consistent with the idea that Uva is critical in the sequential activation of behavioral modules in HVC. PMID:28617829

  14. Factors that enhance English-speaking speech-language pathologists' transcription of Cantonese-speaking children's consonants.

    PubMed

    Lockart, Rebekah; McLeod, Sharynne

    2013-08-01

    To investigate speech-language pathology students' ability to identify errors and transcribe typical and atypical speech in Cantonese, a nonnative language. Thirty-three English-speaking speech-language pathology students completed 3 tasks in an experimental within-subjects design. Task 1 (baseline) involved transcribing English words. In Task 2, students transcribed 25 words spoken by a Cantonese adult. An average of 59.1% of consonants were transcribed correctly (72.9% when Cantonese-English transfer patterns were allowed). There was higher accuracy on shared English and Cantonese syllable-initial consonants /m,n,f,s,h,j,w,l/ and syllable-final consonants. In Task 3, students identified consonant errors and transcribed 100 words spoken by Cantonese-speaking children under 4 additive conditions: (1) baseline, (2) +adult model, (3) +information about Cantonese phonology, and (4) all variables (2 and 3 were counterbalanced). There was a significant improvement in the students' identification and transcription scores for conditions 2, 3, and 4, with a moderate effect size. Increased skill was not based on listeners' proficiency in speaking another language, perceived transcription skill, musicality, or confidence with multilingual clients. Speech-language pathology students, with no exposure to or specific training in Cantonese, have some skills to identify errors and transcribe Cantonese. Provision of a Cantonese-adult model and information about Cantonese phonology increased students' accuracy in transcribing Cantonese speech.

  15. Language experience changes subsequent learning

    PubMed Central

    Onnis, Luca; Thiessen, Erik

    2013-01-01

    What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. PMID:23200510

  16. Pinyin and English Invented Spelling in Chinese-Speaking Students Who Speak English as a Second Language.

    PubMed

    Ding, Yi; Liu, Ru-De; McBride, Catherine A; Fan, Chung-Hau; Xu, Le; Wang, Jia

    2018-05-07

    This study examined pinyin (the official phonetic system that transcribes the lexical tones and pronunciation of Chinese characters) invented spelling and English invented spelling in 72 Mandarin-speaking 6th graders who learned English as their second language. The pinyin invented spelling task measured segmental-level awareness including syllable and phoneme awareness, and suprasegmental-level awareness including lexical tones and tone sandhi in Chinese Mandarin. The English invented spelling task manipulated segmental-level awareness including syllable awareness and phoneme awareness, and suprasegmental-level awareness including word stress. This pinyin task outperformed a traditional phonological awareness task that only measured segmental-level awareness and may have optimal utility to measure unique phonological and linguistic features in Chinese reading. The pinyin invented spelling uniquely explained variance in Chinese conventional spelling and word reading in both languages. The English invented spelling uniquely explained variance in conventional spelling and word reading in both languages. Our findings appear to support the role of phonological activation in Chinese reading. Our experimental linguistic manipulations altered the phonological awareness item difficulties.

  17. Vowel space development in a child acquiring English and Spanish from birth

    NASA Astrophysics Data System (ADS)

    Andruski, Jean; Kim, Sahyang; Nathan, Geoffrey; Casielles, Eugenia; Work, Richard

    2005-04-01

    To date, research on bilingual first language acquisition has tended to focus on the development of higher levels of language, with relatively few analyses of the acoustic characteristics of bilingual infants' and children's speech. Since monolingual infants begin to show perceptual divisions of vowel space that resemble adult native speakers' divisions by about 6 months of age [Kuhl et al., Science 255, 606-608 (1992)], bilingual children's vowel production may provide evidence of their awareness of language differences relatively early during language development. This paper will examine the development of vowel categories in a child whose mother is a native speaker of Castilian Spanish, and whose father is a native speaker of American English. Each parent speaks to the child only in her/his native language. For this study, recordings made at the ages of 2;5 and 2;10 were analyzed and F1-F2 measurements were made of vowels from the stressed syllables of content words. The development of vowel space is compared across ages within each language, and across languages at each age. In addition, the child's productions are compared with the mother's and father's vocalic productions, which provide the predominant input in Spanish and English respectively.

  18. Language related differences of the sustained response evoked by natural speech sounds.

    PubMed

    Fan, Christina Siu-Dschu; Zhu, Xingyu; Dosch, Hans Günter; von Stutterheim, Christiane; Rupp, André

    2017-01-01

    In tonal languages, such as Mandarin Chinese, the pitch contour of vowels discriminates lexical meaning, which is not the case in non-tonal languages such as German. Recent data provide evidence that pitch processing is influenced by language experience. However, there are still many open questions concerning the representation of such phonological and language-related differences at the level of the auditory cortex (AC). Using magnetoencephalography (MEG), we recorded transient and sustained auditory evoked fields (AEF) in native Chinese and German speakers to investigate language related phonological and semantic aspects in the processing of acoustic stimuli. AEF were elicited by spoken meaningful and meaningless syllables, by vowels, and by a French horn tone. Speech sounds were recorded from a native speaker and showed frequency-modulations according to the pitch-contours of Mandarin. The sustained field (SF) evoked by natural speech signals was significantly larger for Chinese than for German listeners. In contrast, the SF elicited by a horn tone was not significantly different between groups. Furthermore, the SF of Chinese subjects was larger when evoked by meaningful syllables compared to meaningless ones, but there was no significant difference regarding whether vowels were part of the Chinese phonological system or not. Moreover, the N100m gave subtle but clear evidence that for Chinese listeners other factors than purely physical properties play a role in processing meaningful signals. These findings show that the N100 and the SF generated in Heschl's gyrus are influenced by language experience, which suggests that AC activity related to specific pitch contours of vowels is influenced in a top-down fashion by higher, language related areas. Such interactions are in line with anatomical findings and neuroimaging data, as well as with the dual-stream model of language of Hickok and Poeppel that highlights the close and reciprocal interaction between

  19. Real-Time Language Processing in School-Age Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Montgomery, James W.

    2006-01-01

    Background: School-age children with specific language impairment (SLI) exhibit slower real-time (i.e. immediate) language processing relative to same-age peers and younger, language-matched peers. Results of the few studies that have been done seem to indicate that the slower language processing of children with SLI is due to inefficient…

  20. Language Acquisition and Language Revitalization

    ERIC Educational Resources Information Center

    O'Grady, William; Hattori, Ryoko

    2016-01-01

    Intergenerational transmission, the ultimate goal of language revitalization efforts, can only be achieved by (re)establishing the conditions under which an imperiled language can be acquired by the community's children. This paper presents a tutorial survey of several key points relating to language acquisition and maintenance in children,…

  1. Interaction between Phonemic Abilities and Syllable Congruency Effect in Young Readers

    ERIC Educational Resources Information Center

    Chetail, Fabienne; Mathey, Stephanie

    2013-01-01

    This study investigated whether and to what extent phonemic abilities of young readers (Grade 5) influence syllabic effects in reading. More precisely, the syllable congruency effect was tested in the lexical decision task combined with masked priming in eleven-year-old children. Target words were preceded by a pseudo-word prime sharing the first…

  2. A Small-Scale, Feasibility Study of Academic Language Time in Primary Grade Language Arts

    ERIC Educational Resources Information Center

    Roskos, Kathleen A.; Zuzolo, Nicole; Primm, Ashley

    2017-01-01

    A small-scale feasibility study was conducted to explore the implementation of academic language time (ALT) in primary grade classrooms with and without access to digital devices. Academic language time is a structural change that dedicates a portion of language arts instructional time to direct vocabulary instruction using evidence-based…

  3. Language Emergence

    PubMed Central

    Brentari, Diane; Goldin-Meadow, Susan

    2017-01-01

    Language emergence describes moments in historical time when nonlinguistic systems become linguistic. Because language can be invented de novo in the manual modality, this offers insight into the emergence of language in ways that the oral modality cannot. Here we focus on homesign, gestures developed by deaf individuals who cannot acquire spoken language and have not been exposed to sign language. We contrast homesign with (a) gestures that hearing individuals produce when they speak, as these cospeech gestures are a potential source of input to homesigners, and (b) established sign languages, as these codified systems display the linguistic structure that homesign has the potential to assume. We find that the manual modality takes on linguistic properties, even in the hands of a child not exposed to a language model. But it grows into full-blown language only with the support of a community that transmits the system to the next generation. PMID:29034268

  4. A Daily Oscillation in the Fundamental Frequency and Amplitude of Harmonic Syllables of Zebra Finch Song

    PubMed Central

    Wood, William E.; Osseward, Peter J.; Roseberry, Thomas K.; Perkel, David J.

    2013-01-01

    Complex motor skills are more difficult to perform at certain points in the day (for example, shortly after waking), but the daily trajectory of motor-skill error is more difficult to predict. By undertaking a quantitative analysis of the fundamental frequency (FF) and amplitude of hundreds of zebra finch syllables per animal per day, we find that zebra finch song follows a previously undescribed daily oscillation. The FF and amplitude of harmonic syllables rise across the morning, reaching a peak near mid-day, and then fall again in the late afternoon until sleep. This oscillation, although somewhat variable, is consistent across days and across animals and does not require serotonin, as animals with serotonergic lesions maintained daily oscillations. We hypothesize that this oscillation is driven by underlying physiological factors which could be shared with other taxa. Song production in zebra finches is a model system for studying complex learned behavior because of the ease of gathering comprehensive behavioral data and the tractability of the underlying neural circuitry. The daily oscillation that we describe promises to reveal new insights into how time of day affects the ability to accomplish a variety of complex learned motor skills. PMID:24312654

  5. Language experience changes subsequent learning.

    PubMed

    Onnis, Luca; Thiessen, Erik

    2013-02-01

    What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. Copyright © 2012 Elsevier B.V. All rights reserved.
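    The forward and backward probabilities mentioned in this record (and in the duplicate record above) are transitional probabilities over adjacent elements: forward, how predictable the next element is given the current one; backward, how predictable the previous element is given the current one. A minimal Python sketch of computing both; the function name and toy sequence are illustrative assumptions:

```python
from collections import Counter

def transitional_probabilities(sequence):
    """Forward P(next | current) and backward P(previous | current)
    statistics over all adjacent pairs in a sequence."""
    pairs = list(zip(sequence, sequence[1:]))
    pair_n = Counter(pairs)
    first_n = Counter(a for a, _ in pairs)   # element counted as pair-initial
    second_n = Counter(b for _, b in pairs)  # element counted as pair-final
    forward = {p: n / first_n[p[0]] for p, n in pair_n.items()}
    backward = {p: n / second_n[p[1]] for p, n in pair_n.items()}
    return forward, backward

# Toy sequence: "ab" is only 2/3 predictable forward (a is also
# followed by c), but fully predictable backward (b only follows a).
fwd, bwd = transitional_probabilities("abacab")
assert fwd[("a", "b")] == 2 / 3
assert bwd[("a", "b")] == 1.0
```

    Pairs like this, where forward and backward statistics dissociate, are what let the study's orthogonal parses pit language-induced biases against general cognitive ones.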

  6. The Transition from the Core Vowels to the Following Segments in Japanese Children Who Stutter: The Second, Third and Fourth Syllables

    ERIC Educational Resources Information Center

    Matsumoto-Shimamori, Sachiyo; Ito, Tomohiko; Fukuda, Suzy E.; Fukuda, Shinji

    2011-01-01

    Shimamori and Ito (2007, Syllable weight and phonological encoding in Japanese children who stutter. "Japanese Journal of Special Education", 44, 451-462; 2008, Syllable weight and frequency of stuttering: Comparison between children who stutter with and without a family history of stuttering. "Japanese Journal of Special Education", 45, 437-445;…

  7. Real-Time MENTAT programming language and architecture

    NASA Technical Reports Server (NTRS)

    Grimshaw, Andrew S.; Silberman, Ami; Liu, Jane W. S.

    1989-01-01

    Real-time MENTAT, a programming environment designed to simplify the task of programming real-time applications in distributed and parallel environments, is described. It is based on the same data-driven computation model and object-oriented programming paradigm as MENTAT. It provides an easy-to-use mechanism to exploit parallelism, language constructs for the expression and enforcement of timing constraints, and run-time support for scheduling and executing real-time programs. The real-time MENTAT programming language is an extended C++. The extensions are added to facilitate automatic detection of data flow and generation of data flow graphs, to express the timing constraints of individual granules of computation, and to provide scheduling directives for the runtime system. A high-level view of the real-time MENTAT system architecture and programming language constructs is provided.

  8. Effects of syllable-initial voicing and speaking rate on the temporal characteristics of monosyllabic words.

    PubMed

    Allen, J S; Miller, J L

    1999-10-01

    Two speech production experiments tested the validity of the traditional method of creating voice-onset-time (VOT) continua for perceptual studies in which the systematic increase in VOT across the continuum is accompanied by a concomitant decrease in the duration of the following vowel. In experiment 1, segmental durations were measured for matched monosyllabic words beginning with either a voiced stop (e.g., big, duck, gap) or a voiceless stop (e.g., pig, tuck, cap). Results from four talkers showed that the change from voiced to voiceless stop produced not only an increase in VOT, but also a decrease in vowel duration. However, the decrease in vowel duration was consistently less than the increase in VOT. In experiment 2, results from four new talkers replicated these findings at two rates of speech, as well as highlighted the contrasting temporal effects on vowel duration of an increase in VOT due to a change in syllable-initial voicing versus a change in speaking rate. It was concluded that the traditional method of creating VOT continua for perceptual experiments, although not perfect, approximates natural speech by capturing the basic trade-off between VOT and vowel duration in syllable-initial voiced versus voiceless stop consonants.
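    The stimulus-construction method this record evaluates can be made concrete: in the traditional VOT continuum, each step lengthens voice onset time and shortens the following vowel by the same amount, holding total syllable duration constant. A minimal Python sketch; all durations and the step count are illustrative assumptions, not values from the experiments:

```python
def vot_continuum(total_ms=300, vot_start_ms=10, vot_end_ms=70, steps=7):
    """Traditional construction of a VOT continuum for perception
    studies: each step adds to VOT and removes the same amount from
    the following vowel, so total syllable duration stays constant."""
    step_ms = (vot_end_ms - vot_start_ms) / (steps - 1)
    series = []
    for i in range(steps):
        vot = vot_start_ms + i * step_ms
        series.append((vot, total_ms - vot))  # (VOT, vowel duration) in ms
    return series

series = vot_continuum()
assert all(vot + vowel == 300 for vot, vowel in series)
assert series[0] == (10, 290) and series[-1] == (70, 230)
```

    The experiments' finding that natural vowel shortening is consistently smaller than the VOT increase means this equal, one-for-one trade-off is an approximation of natural speech, not an exact match.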

  9. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English

    ERIC Educational Resources Information Center

    Banzina, Elina; Dilley, Laura C.; Hewitt, Lynne E.

    2016-01-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated Russian learners' of English production of SS and UU syllables. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found,…

  10. Psycholinguistics: a cross-language perspective.

    PubMed

    Bates, E; Devescovi, A; Wulfeck, B

    2001-01-01

    Cross-linguistic studies are essential to the identification of universal processes in language development, language use, and language breakdown. Comparative studies in all three areas are reviewed, demonstrating powerful differences across languages in the order in which specific structures are acquired by children, the sparing and impairment of those structures in aphasic patients, and the structures that normal adults rely upon most heavily in real-time word and sentence processing. It is proposed that these differences reflect a cost-benefit trade-off among universal mechanisms for learning and processing (perception, attention, motor planning, memory) that are critical for language, but are not unique to language.

  11. Group Time: Building Language at Group Time

    ERIC Educational Resources Information Center

    Church, Ellen Booth

    2004-01-01

    This article features energizing and surprising activities for children at group time. In the drawing activity, children are asked to give instructions on how to draw a picture using vocabulary and descriptive language. In the mailbox activity, children will be surprised to discover that they have mail at group time. Mailboxes can be used for…

  12. Universal and Language-Specific Constraints on Phonemic Awareness: Evidence from Russian-Hebrew Bilingual Children

    ERIC Educational Resources Information Center

    Saiegh-Haddad, Elinor; Kogan, Nadya; Walters, Joel

    2010-01-01

    The study tested phonemic awareness in the two languages of Russian (L1)-Hebrew (L2) sequential bilingual children (N = 20) using phoneme deletion tasks where the phoneme to be deleted occurred word initial, word final, as a singleton, or part of a cluster, in long and short words and stressed and unstressed syllables. The experiments were…

  13. For a Psycholinguistic Model of Handwriting Production: Testing the Syllable-Bigram Controversy

    ERIC Educational Resources Information Center

    Kandel, Sonia; Peereman, Ronald; Grosjacques, Geraldine; Fayol, Michel

    2011-01-01

    This study examined the theoretical controversy on the impact of syllables and bigrams in handwriting production. French children and adults wrote words on a digitizer so that we could collect data on the local, online processing of handwriting production. The words differed in the position of the lowest frequency bigram. In one condition, it…

  14. Language Sampling for Preschoolers With Severe Speech Impairments

    PubMed Central

    Ragsdale, Jamie; Bustos, Aimee

    2016-01-01

    Purpose The purposes of this investigation were to determine if measures such as mean length of utterance (MLU) and percentage of comprehensible words can be derived reliably from language samples of children with severe speech impairments and if such measures correlate with tools that measure constructs assumed to be related. Method Language samples of 15 preschoolers with severe speech impairments (but receptive language within normal limits) were transcribed independently by 2 transcribers. Nonparametric statistics were used to determine which measures, if any, could be transcribed reliably and to determine if correlations existed between language sample measures and standardized measures of speech, language, and cognition. Results Reliable measures were extracted from the majority of the language samples, including MLU in words, mean number of syllables per utterance, and percentage of comprehensible words. Language sample comprehensibility measures were correlated with a single word comprehensibility task. Also, language sample MLUs and mean length of the participants' 3 longest sentences from the MacArthur–Bates Communicative Development Inventory (Fenson et al., 2006) were correlated. Conclusion Language sampling, given certain modifications, may be used for some 3- to 5-year-old children with normal receptive language who have severe speech impairments to provide reliable expressive language and comprehensibility information. PMID:27552110

  15. Language Sampling for Preschoolers With Severe Speech Impairments.

    PubMed

    Binger, Cathy; Ragsdale, Jamie; Bustos, Aimee

    2016-11-01

The purposes of this investigation were to determine if measures such as mean length of utterance (MLU) and percentage of comprehensible words can be derived reliably from language samples of children with severe speech impairments and if such measures correlate with tools that measure constructs assumed to be related. Language samples of 15 preschoolers with severe speech impairments (but receptive language within normal limits) were transcribed independently by 2 transcribers. Nonparametric statistics were used to determine which measures, if any, could be transcribed reliably and to determine if correlations existed between language sample measures and standardized measures of speech, language, and cognition. Reliable measures were extracted from the majority of the language samples, including MLU in words, mean number of syllables per utterance, and percentage of comprehensible words. Language sample comprehensibility measures were correlated with a single word comprehensibility task. Also, language sample MLUs and mean length of the participants' 3 longest sentences from the MacArthur-Bates Communicative Development Inventory (Fenson et al., 2006) were correlated. Language sampling, given certain modifications, may be used for some 3- to 5-year-old children with normal receptive language who have severe speech impairments to provide reliable expressive language and comprehensibility information.

  16. Heritage Language Learners' Perceptions of Acquiring and Maintaining the Spanish Language

    ERIC Educational Resources Information Center

    Torres, Kelly Moore; Turner, Jeannine E.

    2017-01-01

    The number of heritage language learners in American universities is increasing each year. Similar to monolingual students, this subgroup of learners is required to complete university level "foreign language" coursework, which can be a source of anxiety. Particularly, this subgroup of learners may experience high levels of anxiety due…

  17. Teaching Word Stress to Turkish EFL (English as a Foreign Language) Learners through Internet-Based Video Lessons

    ERIC Educational Resources Information Center

    Hismanoglu, Murat

    2012-01-01

    The purpose of this study is to elicit problem causing word stress patterns for Turkish EFL (English as a foreign language) learners and investigate whether Internet-based pronunciation lesson is superior to traditional pronunciation lesson in terms of enhancing Turkish EFL learners' accurate production of stressed syllables in English words. A…

  18. Development of precursors to speech in infants exposed to two languages.

    PubMed

    Oller, D K; Eilers, R E; Urbano, R; Cobo-Lewis, A B

    1997-06-01

The study of bilingualism has often focused on two contradictory possibilities: that the learning of two languages may produce deficits of performance in each language by comparison with performance of monolingual individuals, or on the contrary, that the learning of two languages may produce linguistic or cognitive advantages with regard to the monolingual learning experience. The work reported here addressed the possibility that the very early bilingual experience of infancy may affect the unfolding of vocal precursors to speech. The results of longitudinal research with 73 infants aged 0;4 to 1;6 in monolingual and bilingual environments provided no support either for a bilingual deficit hypothesis or for its opposite, a bilingual advantage hypothesis. Infants reared in bilingual and monolingual environments manifested similar ages of onset for canonical babbling (production of well-formed syllables), an event known to be fundamentally related to speech development. Further, quantitative measures of vocal performance (proportion of usage of well-formed syllables and vowel-like sounds) showed additional similarities between monolingual and bilingual infants. The similarities applied to infants of middle and low socio-economic status and to infants that were born at term or prematurely. The results suggest that vocal development in the first year of life is robust with respect to conditions of rearing. The biological foundations of speech appear to be such as to resist modifications in the natural schedule of vocal development.

  19. Spatiotemporal frequency characteristics of cerebral oscillations during the perception of fundamental frequency contour changes in one-syllable intonation.

    PubMed

    Ueno, Sanae; Okumura, Eiichi; Remijn, Gerard B; Yoshimura, Yuko; Kikuchi, Mitsuru; Shitamichi, Kiyomi; Nagao, Kikuko; Mochiduki, Masayuki; Haruta, Yasuhiro; Hayashi, Norio; Munesue, Toshio; Tsubokawa, Tsunehisa; Oi, Manabu; Nakatani, Hideo; Higashida, Haruhiro; Minabe, Yoshio

    2012-05-02

Accurate perception of fundamental frequency (F0) contour changes in the human voice is important for understanding a speaker's intonation, and consequently also his/her attitude. In this study, we investigated the neural processes involved in the perception of F0 contour changes in the Japanese one-syllable interjection "ne" in 21 native-Japanese listeners. A passive oddball paradigm was applied in which "ne" with a high falling F0 contour, used when urging a reaction from the listener, was randomly presented as a rare deviant among a frequent "ne" syllable with a flat F0 contour (i.e., meaningless intonation). We applied an adaptive spatial filtering method to the neuromagnetic time course recorded by whole-head magnetoencephalography (MEG) and estimated the spatiotemporal frequency dynamics of event-related cerebral oscillatory changes in the oddball paradigm. Our results demonstrated a significant elevation of beta band event-related desynchronization (ERD) in the right temporal and frontal areas, in time windows from 100 to 300 and from 300 to 500 ms after the onset of deviant stimuli (high falling F0 contour). This is the first study to reveal detailed spatiotemporal frequency characteristics of cerebral oscillations during the perception of intonational (not lexical) F0 contour changes in the human voice. The results further confirmed that the right hemisphere is associated with perception of intonational F0 contour information in the human voice, especially in early time windows.

  20. Can non-interactive language input benefit young second-language learners?

    PubMed

    Au, Terry Kit-Fong; Chan, Winnie Wailan; Cheng, Liao; Siegel, Linda S; Tso, Ricky Van Yip

    2015-03-01

To fully acquire a language, especially its phonology, children need linguistic input from native speakers early on. When interaction with native speakers is not always possible - e.g. for children learning a second language that is not the societal language - audio recordings are commonly used as an affordable substitute. But does such non-interactive input work? Two experiments evaluated the usefulness of audio storybooks in acquiring a more native-like second-language accent. Young children, first- and second-graders in Hong Kong whose native language was Cantonese Chinese, were given take-home listening assignments in a second language, either English or Putonghua Chinese. Accent ratings of the children's story reading revealed measurable benefits of non-interactive input from native speakers. The benefits were far more robust for Putonghua than English. Implications for second-language accent acquisition are discussed.

  1. A cross-language study of perception of lexical stress in English.

    PubMed

    Yu, Vickie Y; Andruski, Jean E

    2010-08-01

This study investigates the question of whether language background affects the perception of lexical stress in English. Thirty native English speakers and 30 native Chinese learners of English participated in a stressed-syllable identification task and a discrimination task involving three types of stimuli (real words/pseudowords/hums). The results show that both language groups were able to identify and discriminate stress patterns. Lexical and segmental information affected the English and Chinese speakers in varying degrees. English and Chinese speakers showed different response patterns to trochaic vs. iambic stress across the three types of stimuli. An acoustic analysis revealed that the two language groups used different acoustic cues to process lexical stress. The findings suggest that the different degrees of lexical and segmental effects can be explained by language background, which in turn supports the hypothesis that language background affects the perception of lexical stress in English.

  2. Length Effects Turn Out To Be Syllable Structure Effects: Response to Roelofs (2002).

    ERIC Educational Resources Information Center

    Santiago, Julio; MacKay, Donald G.; Palma, Alfonso

    2002-01-01

Responds to a commentary written in response to a research study conducted by the authors (Santiago et al., 2000), which suggests that a reanalysis of the data on syllable structure effects that takes word length into account leads to a conclusion that is the opposite of what the study found. (Author/VWL)

  3. Structure of Preschool Phonological Sensitivity: Overlapping Sensitivity to Rhyme, Words, Syllables, and Phonemes.

    ERIC Educational Resources Information Center

    Anthony, Jason L.; Lonigan, Christopher J.; Burgess, Stephen R.; Driscoll, Kimberly; Phillips, Beth M.; Cantor, Brenlee G.

    2002-01-01

This study examined relations among sensitivity to words, syllables, rhymes, and phonemes in older and younger preschoolers. Confirmatory factor analyses found that a one-factor model best explained the data from both groups of children. Only variance common to all phonological sensitivity skills was related to print knowledge and rudimentary…

  4. Absolute and Relative Reliability of Percentage of Syllables Stuttered and Severity Rating Scales

    ERIC Educational Resources Information Center

    Karimi, Hamid; O'Brian, Sue; Onslow, Mark; Jones, Mark

    2014-01-01

    Purpose: Percentage of syllables stuttered (%SS) and severity rating (SR) scales are measures in common use to quantify stuttering severity and its changes during basic and clinical research conditions. However, their reliability has not been assessed with indices measuring both relative and absolute reliability. This study was designed to provide…
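The %SS measure discussed in this record and several others above is simple arithmetic: the count of syllables judged stuttered, divided by the total syllables spoken, times 100. A minimal sketch of that calculation (the function name and the example counts are illustrative, not taken from the study):

```python
def percent_syllables_stuttered(stuttered: int, total: int) -> float:
    """Percent syllables stuttered (%SS): stuttered syllables as a
    percentage of all syllables spoken in a speech sample."""
    if total <= 0:
        raise ValueError("total syllable count must be positive")
    return 100.0 * stuttered / total

# For example, 12 stuttered syllables in a 300-syllable sample:
score = percent_syllables_stuttered(12, 300)  # 4.0 %SS
```

Severity rating (SR) scales, by contrast, are ordinal judgments and have no comparable closed-form computation, which is why the reliability comparison in the record above is of interest.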

  5. Influence of Initial and Final Consonants on Vowel Duration in CVC Syllables.

    ERIC Educational Resources Information Center

    Naeser, Margaret A.

    This study investigates the influence of initial and final consonants /p, b, s, z/ on the duration of four vowels /I, i, u, ae/ in 64 CVC syllables uttered by eight speakers of English from the same dialect area. The CVC stimuli were presented to the subjects in a frame sentence from a master tape. Subjects repeated each sentence immediately after…

  6. Feet and syllables in elephants and missiles: a reappraisal.

    PubMed

    Zonneveld, Wim; van der Pas, Brigit; de Bree, Elise

    2007-01-01

    Using data from a case study presented in Chiat (1989), Marshall and Chiat (2003) compare two different approaches to account for the realization of intervocalic consonants in child phonology: "coda capture theory" and the "foot domain account". They argue in favour of the latter account. In this note, we present a reappraisal of this argument using the same data. We conclude that acceptance of the foot domain account, in the specific way developed by the authors, is unmotivated for both theoretical and empirical reasons. We maintain that syllable-based coda capture is (still) the better approach to account for the relevant facts.

  7. The nature of the language input affects brain activation during learning from a natural language

    PubMed Central

    Plante, Elena; Patterson, Dianne; Gómez, Rebecca; Almryde, Kyle R.; White, Milo G.; Asbjørnsen, Arve E.

    2015-01-01

    Artificial language studies have demonstrated that learners are able to segment individual word-like units from running speech using the transitional probability information. However, this skill has rarely been examined in the context of natural languages, where stimulus parameters can be quite different. In this study, two groups of English-speaking learners were exposed to Norwegian sentences over the course of three fMRI scans. One group was provided with input in which transitional probabilities predicted the presence of target words in the sentences. This group quickly learned to identify the target words and fMRI data revealed an extensive and highly dynamic learning network. These results were markedly different from activation seen for a second group of participants. This group was provided with highly similar input that was modified so that word learning based on syllable co-occurrences was not possible. These participants showed a much more restricted network. The results demonstrate that the nature of the input strongly influenced the nature of the network that learners employ to learn the properties of words in a natural language. PMID:26257471
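The segmentation mechanism this record describes, tracking how often one syllable follows another and positing word boundaries where the forward transitional probability dips, can be sketched generically as follows. This is an illustration of the general technique, not the study's Norwegian materials; the toy syllable corpus, threshold value, and function names are all hypothetical:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probability P(B|A) = count(A followed by B)
    divided by count(A), for each adjacent syllable pair in the corpus."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, tps, threshold):
    """Posit a word boundary wherever the transitional probability
    between two adjacent syllables falls below the threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps.get((a, b), 0.0) < threshold:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return words

# Toy corpus: within-word transitions recur, between-word ones vary.
corpus = ["pre", "tty", "ba", "by", "pre", "tty", "do", "ggy"]
tps = transitional_probabilities(corpus)
words = segment(corpus, tps, threshold=0.6)
```

With such a tiny corpus some between-word transitions still score high by chance; the principle only becomes reliable over extended exposure, which is why the learners in the study needed input in which the co-occurrence statistics genuinely predicted the target words.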

  8. Clinician Percent Syllables Stuttered, Clinician Severity Ratings and Speaker Severity Ratings: Are They Interchangeable?

    ERIC Educational Resources Information Center

    Karimi, Hamid; Jones, Mark; O'Brian, Sue; Onslow, Mark

    2014-01-01

    Background: At present, percent syllables stuttered (%SS) is the gold standard outcome measure for behavioural stuttering treatment research. However, ordinal severity rating (SR) procedures have some inherent advantages over that method. Aims: To establish the relationship between Clinician %SS, Clinician SR and self-reported Speaker SR. To…

  9. Telehealth Delivery of Rapid Syllable Transitions (ReST) Treatment for Childhood Apraxia of Speech

    ERIC Educational Resources Information Center

    Thomas, Donna C.; McCabe, Patricia; Ballard, Kirrie J.; Lincoln, Michelle

    2016-01-01

    Background: Rapid Syllable Transitions (ReST) treatment uses pseudo-word targets with varying lexical stress to target simultaneously articulation, prosodic accuracy and coarticulatory transitions in childhood apraxia of speech (CAS). The treatment is efficacious for the acquisition of imitated pseudo-words, and generalization of skill to…

  10. Syllable-constituent perception by hearing-aid users: Common factors in quiet and noise

    PubMed Central

    Miller, James D.; Watson, Charles S.; Leek, Marjorie R.; Dubno, Judy R.; Wark, David J.; Souza, Pamela E.; Gordon-Salant, Sandra; Ahlstrom, Jayne B.

    2017-01-01

    The abilities of 59 adult hearing-aid users to hear phonetic details were assessed by measuring their abilities to identify syllable constituents in quiet and in differing levels of noise (12-talker babble) while wearing their aids. The set of sounds consisted of 109 frequently occurring syllable constituents (45 onsets, 28 nuclei, and 36 codas) spoken in varied phonetic contexts by eight talkers. In nominal quiet, a speech-to-noise ratio (SNR) of 40 dB, scores of individual listeners ranged from about 23% to 85% correct. Averaged over the range of SNRs commonly encountered in noisy situations, scores of individual listeners ranged from about 10% to 71% correct. The scores in quiet and in noise were very strongly correlated, R = 0.96. This high correlation implies that common factors play primary roles in the perception of phonetic details in quiet and in noise. Otherwise said, hearing-aid users' problems perceiving phonetic details in noise appear to be tied to their problems perceiving phonetic details in quiet and vice versa. PMID:28464618

  11. Developmental weighting shifts for noise components of fricative-vowel syllables.

    PubMed

    Nittrouer, S; Miller, M E

    1997-07-01

Previous studies have convincingly shown that the weight assigned to vocalic formant transitions in decisions of fricative identity for fricative-vowel syllables decreases with development. Although these same studies suggested a developmental increase in the weight assigned to the noise spectrum, the role of the aperiodic-noise portions of the signals in these fricative decisions has not been as well studied. The purpose of these experiments was to examine more closely developmental shifts in the weight assigned to the aperiodic-noise components of the signals in decisions of syllable-initial fricative identity. Two experiments used noises varying along continua from a clear /s/ percept to a clear /ʃ/ percept. In experiment 1, these noises were created by combining /s/ and /ʃ/ noises produced by a human vocal tract at different amplitude ratios, a process that resulted in stimuli differing primarily in the amplitude of a relatively low-frequency (roughly 2.2-kHz) peak. In experiment 2, noises that varied only in the amplitude of a similar low-frequency peak were created with a software synthesizer. Both experiments used synthetic /a/ and /u/ portions, and efforts were made to minimize possible contributions of vocalic formant transitions to fricative labeling. Children and adults labeled the resulting stimuli as /s/ vowel or /ʃ/ vowel. Combined results of the two experiments showed that children's responses were less influenced than those of adults by the amplitude of the low-frequency peak of fricative noises.

  12. Recombinative generalization of within-syllable units in nonreading adults with mental retardation.

    PubMed

    Saunders, Kathryn J; O'Donnell, Jennifer; Vaidya, Manish; Williams, Dean C

    2003-01-01

    Two adults with mental retardation demonstrated the recombination of within-syllable units (onsets and rimes) using a spoken-to-printed-word matching-to-sample (MTS) procedure. Further testing with 1 participant showed comprehension of the printed words. Printed-word naming was minimal before, but greater after, comprehension tests. The findings suggest that these procedures hold promise for further basic and applied analyses of word-attack skills.

  13. The Whorfian time warp: Representing duration through the language hourglass.

    PubMed

    Bylund, Emanuel; Athanasopoulos, Panos

    2017-07-01

How do humans construct their mental representations of the passage of time? The universalist account claims that abstract concepts like time are universal across humans. In contrast, the linguistic relativity hypothesis holds that speakers of different languages represent duration differently. The precise impact of language on duration representation is, however, unknown. Here, we show that language can have a powerful role in transforming humans' psychophysical experience of time. Contrary to the universalist account, we found language-specific interference in a duration reproduction task, where stimulus duration conflicted with its physical growth. When reproducing duration, Swedish speakers were misled by stimulus length, and Spanish speakers were misled by stimulus size/quantity. These patterns conform to preferred expressions of duration magnitude in these languages (Swedish: long/short time; Spanish: much/small time). Critically, Spanish-Swedish bilinguals performing the task in both languages showed different interference depending on language context. Such shifting behavior within the same individual reveals hitherto undocumented levels of flexibility in time representation. Finally, contrary to the linguistic relativity hypothesis, language interference was confined to difficult discriminations (i.e., when stimuli varied only subtly in duration and growth), and was eliminated when linguistic cues were removed from the task. These results reveal the malleable nature of human time representation as part of a highly adaptive information processing system.

  14. Final Syllable Lengthening (FSL) in infant vocalizations.

    PubMed

    Nathani, Suneeti; Oller, D Kimbrough; Cobo-Lewis, Alan B

    2003-02-01

    Final Syllable Lengthening (FSL) has been extensively examined in infant vocalizations in order to determine whether its basis is biological or learned. Findings suggest there may be a U-shaped developmental trajectory for FSL. The present study sought to verify this pattern and to determine whether vocal maturity and deafness influence FSL. Eight normally hearing infants, aged 0;3 to 1;0, and eight deaf infants, aged 0;8 to 4;0, were examined at three levels of prelinguistic vocal development: precanonical, canonical, and postcanonical. FSL was found at all three levels suggesting a biological basis for this phenomenon. Individual variability was, however, considerable. Reduction in the magnitude of FSL across the three sessions provided some support for a downward trend for FSL in infancy. Findings further indicated that auditory deprivation can significantly affect temporal aspects of infant speech production.

  15. Geographically pervasive effects of urban noise on frequency and syllable rate of songs and calls in silvereyes (Zosterops lateralis).

    PubMed

    Potvin, Dominique A; Parris, Kirsten M; Mulder, Raoul A

    2011-08-22

Recent studies in the Northern Hemisphere have shown that songbirds living in noisy urban environments sing at higher frequencies than their rural counterparts. However, several aspects of this phenomenon remain poorly understood. These include the geographical scale over which such patterns occur (most studies have compared local populations), and whether they involve phenotypic plasticity or microevolutionary change. We conducted a field study of silvereye (Zosterops lateralis) vocalizations over more than 1 million km² of urban and rural south-eastern Australia, and compared possible effects of urban noise on songs (which are learned) and contact calls (which are innate). Across 14 paired urban and rural populations, silvereyes consistently sang both songs and contact calls at higher frequencies in urban environments. Syllable rate (syllables per second) decreased in urban environments, consistent with the hypothesis that reflective structures degrade song and encourage longer intervals between syllables. This comprehensive study is, to our knowledge, the first to demonstrate varied adaptations of urban bird vocalizations over a vast geographical area, and to provide insight into the mechanism responsible for these changes.

  16. Studies in the Phonology of Asian Languages VI: Complex Syllable Nuclei in Vietnamese.

    ERIC Educational Resources Information Center

    Han, Mieko S.

This study is the sixth in the series "Studies in the Phonology of Asian Languages." A phonetic and phonemic analysis of the three complex nuclei of Vietnamese (Hanoi dialect), spelled (1) ye-, -ie-, -ia, (2) -u'o'-, -u'a, and (3) -uo-, -ua, was carried out using the sound spectrograph. The relative domains of the target qualities of the…

  17. A Quantitative Causal-Comparative Nonexperimental Research Study of English Language Learner and Non-English Language Learner Students' Oral Reading Fluency Growth

    ERIC Educational Resources Information Center

    O'Loughlin, Tricia Ann

    2017-01-01

    Beginning learners of English progress through the same stages to acquire language. However, the length of time each student spends at a particular stage may vary greatly. Under the current educational policies, ELL students are expected to participate in the general education curriculum while developing their proficiency in the English language.…

  18. Intercultural Language Educators for an Intercultural World: Action upon Reflection

    ERIC Educational Resources Information Center

    Siqueira, Sávio

    2017-01-01

    Bearing in mind that learning a new language is much more than acquiring a new code, but a new way of being in the world, the aim of the article is to briefly raise and discuss relevant issues relating to language teacher education in these contemporary times, especially in the area of English Language Teaching (ELT). Emphasis is placed on the…

  19. Testing the Language of German Cerebral Palsy Patients with Right Hemispheric Language Organization after Early Left Hemispheric Damage

    ERIC Educational Resources Information Center

    Schwilling, Eleonore; Krageloh-Mann, Ingeborg; Konietzko, Andreas; Winkler, Susanne; Lidzba, Karen

    2012-01-01

    Language functions are generally represented in the left cerebral hemisphere. After early (prenatally acquired or perinatally acquired) left hemispheric brain damage language functions may be salvaged by reorganization into the right hemisphere. This is different from brain lesions acquired in adulthood which normally lead to aphasia. Right…

  20. Acoustic Correlates of Inflectional Morphology in the Speech of Children with Specific Language Impairment and Their Typically Developing Peers

    ERIC Educational Resources Information Center

    Owen, Amanda J.; Goffman, Lisa

    2007-01-01

    The development of the use of the third-person singular -s in open syllable verbs in children with specific language impairment (SLI) and their typically developing peers was examined. Verbs that included overt productions of the third-person singular -s morpheme (e.g. "Bobby plays ball everyday;" "Bear laughs when mommy buys…

  1. The Basis for Language Acquisition: Congenitally Deaf Infants Discriminate Vowel Length in the First Months after Cochlear Implantation.

    PubMed

    Vavatzanidis, Niki Katerina; Mürbe, Dirk; Friederici, Angela; Hahne, Anja

    2015-12-01

    One main incentive for supplying hearing impaired children with a cochlear implant is the prospect of oral language acquisition. Only scarce knowledge exists, however, of what congenitally deaf children actually perceive when receiving their first auditory input, and specifically what speech-relevant features they are able to extract from the new modality. We therefore presented congenitally deaf infants and young children implanted before the age of 4 years with an oddball paradigm of long and short vowel variants of the syllable /ba/. We measured the EEG in regular intervals to study their discriminative ability starting with the first activation of the implant up to 8 months later. We were thus able to time-track the emerging ability to differentiate one of the most basic linguistic features that bears semantic differentiation and helps in word segmentation, namely, vowel length. Results show that already 2 months after the first auditory input, but not directly after implant activation, these early implanted children differentiate between long and short syllables. Surprisingly, after only 4 months of hearing experience, the ERPs have reached the same properties as those of the normal hearing control group, demonstrating the plasticity of the brain with respect to the new modality. We thus show that a simple but linguistically highly relevant feature such as vowel length reaches age-appropriate electrophysiological levels as fast as 4 months after the first acoustic stimulation, providing an important basis for further language acquisition.

  2. Acquisition of English word stress patterns in early and late bilinguals

    NASA Astrophysics Data System (ADS)

    Guion, Susan G.

    2004-05-01

Given early acquisition of prosodic knowledge as demonstrated by infants' sensitivity to native language accentual patterns, the question of whether learners can acquire new prosodic patterns across the life span arises. Acquisition of English stress by early and late Spanish-English and Korean-English bilinguals was investigated. In a production task, two-syllable nonwords were produced in noun and verb sentence frames. In a perception task, preference for first or last syllable stress on the nonwords was indicated. Also, real words that were phonologically similar to the nonwords were collected. Logistic regression analyses and ANOVAs were conducted to determine the effect of three factors (syllable structure, lexical class, and stress patterns of phonologically similar words) on the production and perception responses. In all three groups, stress patterns of phonologically similar real words predicted stress on nonwords. For the two other factors, early bilinguals patterned similarly to the native-English participants. Late Spanish-English bilinguals demonstrated less learning of stress patterns based on syllabic structure, and late Korean-English bilinguals demonstrated less learning of stress patterns based on lexical class than native-English speakers. Thus, compared to native speakers, late bilinguals' ability to abstract stress patterns is reduced and affected by the first language. [Work supported by NIH.]

  3. Language-Independent and Language-Specific Aspects of Early Literacy: An Evaluation of the Common Underlying Proficiency Model

    ERIC Educational Resources Information Center

    Goodrich, J. Marc; Lonigan, Christopher J.

    2017-01-01

    According to the common underlying proficiency model (Cummins, 1981), as children acquire academic knowledge and skills in their first language, they also acquire language-independent information about those skills that can be applied when learning a second language. The purpose of this study was to evaluate the relevance of the common underlying…

  4. Multilingualism and fMRI: Longitudinal Study of Second Language Acquisition

    PubMed Central

    Andrews, Edna; Frigau, Luca; Voyvodic-Casabo, Clara; Voyvodic, James; Wright, John

    2013-01-01

    BOLD fMRI is often used for the study of human language. However, there are still very few attempts to conduct longitudinal fMRI studies in the study of language acquisition by measuring auditory comprehension and reading. The following paper is the first in a series concerning a unique longitudinal study devoted to the analysis of bi- and multilingual subjects who are: (1) already proficient in at least two languages; or (2) are acquiring Russian as a second/third language. The focus of the current analysis is to present data from the auditory sections of a set of three scans acquired from April, 2011 through April, 2012 on a five-person subject pool who are learning Russian during the study. All subjects were scanned using the same protocol for auditory comprehension on the same General Electric LX 3T Signa scanner in Duke University Hospital. Using a multivariate analysis of covariance (MANCOVA) for statistical analysis, proficiency measurements are shown to correlate significantly with scan results in the Russian conditions over time. The importance of both the left and right hemispheres in language processing is discussed. Special attention is devoted to the importance of contextualizing imaging data with corresponding behavioral and empirical testing data using a multivariate analysis of variance. This is the only study to date that includes: (1) longitudinal fMRI data with subject-based proficiency and behavioral data acquired in the same time frame; and (2) statistical modeling that demonstrates the importance of covariate language proficiency data for understanding imaging results of language acquisition. PMID:24961428

  5. Multilingualism and fMRI: Longitudinal Study of Second Language Acquisition.

    PubMed

    Andrews, Edna; Frigau, Luca; Voyvodic-Casabo, Clara; Voyvodic, James; Wright, John

    2013-05-28

    BOLD fMRI is often used for the study of human language. However, there are still very few attempts to conduct longitudinal fMRI studies in the study of language acquisition by measuring auditory comprehension and reading. The following paper is the first in a series concerning a unique longitudinal study devoted to the analysis of bi- and multilingual subjects who are: (1) already proficient in at least two languages; or (2) are acquiring Russian as a second/third language. The focus of the current analysis is to present data from the auditory sections of a set of three scans acquired from April, 2011 through April, 2012 on a five-person subject pool who are learning Russian during the study. All subjects were scanned using the same protocol for auditory comprehension on the same General Electric LX 3T Signa scanner in Duke University Hospital. Using a multivariate analysis of covariance (MANCOVA) for statistical analysis, proficiency measurements are shown to correlate significantly with scan results in the Russian conditions over time. The importance of both the left and right hemispheres in language processing is discussed. Special attention is devoted to the importance of contextualizing imaging data with corresponding behavioral and empirical testing data using a multivariate analysis of variance. This is the only study to date that includes: (1) longitudinal fMRI data with subject-based proficiency and behavioral data acquired in the same time frame; and (2) statistical modeling that demonstrates the importance of covariate language proficiency data for understanding imaging results of language acquisition.

  6. A Comparison and Evaluation of Real-Time Software Systems Modeling Languages

    NASA Technical Reports Server (NTRS)

    Evensen, Kenneth D.; Weiss, Kathryn Anne

    2010-01-01

    A model-driven approach to real-time software systems development enables the conceptualization of software, fostering a more thorough understanding of its often complex architecture and behavior while promoting the documentation and analysis of concerns common to real-time embedded systems such as scheduling, resource allocation, and performance. Several modeling languages have been developed to assist in the model-driven software engineering effort for real-time systems, and these languages are beginning to gain traction with practitioners throughout the aerospace industry. This paper presents a survey of several real-time software system modeling languages, namely the Architecture Analysis and Design Language (AADL), the Unified Modeling Language (UML), the Systems Modeling Language (SysML), the Modeling and Analysis of Real-Time Embedded Systems (MARTE) UML profile, and the AADL for UML profile. Each language has its advantages and disadvantages, and in order to adequately describe a real-time software system's architecture, a complementary use of multiple languages is almost certainly necessary. This paper aims to explore these languages in the context of understanding the value each brings to the model-driven software engineering effort and to determine if it is feasible and practical to combine aspects of the various modeling languages to achieve more complete coverage in architectural descriptions. To this end, each language is evaluated with respect to a set of criteria such as scope, formalisms, and architectural coverage. An example is used to help illustrate the capabilities of the various languages.

  7. Syllable Onset Intervals as an Indicator of Discourse and Syntactic Boundaries in Taiwan Mandarin

    ERIC Educational Resources Information Center

    Fon, Janice; Johnson, Keith

    2004-01-01

    This study looks at the syllable onset interval (SOI) patterning in Taiwan Mandarin spontaneous speech and its relationship to discourse and syntactic units. Monologs were elicited by asking readers to tell stories depicted in comic strips and were transcribed and segmented into Discourse Segment Units (Grosz & Sidner, 1986), clauses, and…

  8. Timing of translation in cross-language qualitative research.

    PubMed

    Santos, Hudson P O; Black, Amanda M; Sandelowski, Margarete

    2015-01-01

    Although there is increased understanding of language barriers in cross-language studies, the point at which language transformation processes are applied in research is inconsistently reported, or treated as a minor issue. Differences in translation timeframes raise methodological issues related to the material to be translated, as well as to the process of data analysis and interpretation. In this article we address methodological issues related to the timing of translation from Portuguese to English in two international cross-language collaborative research studies involving researchers from Brazil, Canada, and the United States. One study entailed late-phase translation of a research report, whereas the other study involved early-phase translation of interview data. The timing of translation in interaction with the object of translation should be considered, in addition to the language, cultural, subject matter, and methodological competencies of research team members. © The Author(s) 2014.

  9. The Nature of the Phonological Processing in French Dyslexic Children: Evidence for the Phonological Syllable and Linguistic Features' Role in Silent Reading and Speech Discrimination

    ERIC Educational Resources Information Center

    Maionchi-Pino, Norbert; Magnan, Annie; Ecalle, Jean

    2010-01-01

    This study investigated the status of phonological representations in French dyslexic children (DY) compared with reading level- (RL) and chronological age-matched (CA) controls. We focused on the syllable's role and on the impact of French linguistic features. In Experiment 1, we assessed oral discrimination abilities of pairs of syllables that…

  10. Vowels, Syllables, and Letter Names: Differences between Young Children's Spelling in English and Portuguese

    ERIC Educational Resources Information Center

    Pollo, Tatiana Cury; Kessler, Brett; Treiman, Rebecca

    2005-01-01

    Young Portuguese-speaking children have been reported to produce more vowel- and syllable-oriented spellings than have English speakers. To investigate the extent and source of such differences, we analyzed children's vocabulary and found that Portuguese words have more vowel letter names and a higher vowel-consonant ratio than do English words.…

  11. Teacher knowledge of basic language concepts and dyslexia.

    PubMed

    Washburn, Erin K; Joshi, R Malatesha; Binks-Cantrell, Emily S

    2011-05-01

    Roughly one-fifth of the US population displays one or more symptoms of dyslexia: a specific learning disability that affects an individual's ability to process written language. Consequently, elementary school teachers are teaching students who struggle with inaccurate or slow reading, poor spelling, poor writing, and other language processing difficulties. Findings from studies have indicated that teachers lack essential knowledge needed to teach struggling readers, particularly children with dyslexia. However, few studies have sought to assess teachers' knowledge and perceptions about dyslexia in conjunction with knowledge of basic language concepts related to reading instruction. Thus, the purpose of the present study was to examine elementary school teachers' knowledge of basic language concepts and their knowledge and perceptions about dyslexia. Findings from the present study indicated that teachers, on average, were able to display implicit skills related to certain basic language concepts (e.g. syllable counting), but failed to demonstrate explicit knowledge of others (e.g. phonics principles). Also, teachers seemed to hold the common misconception that dyslexia is a visual processing deficit rather than a phonological processing deficit. Copyright © 2011 John Wiley & Sons, Ltd.

  12. Real-Time Multiprocessor Programming Language (RTMPL) user's manual

    NASA Technical Reports Server (NTRS)

    Arpasi, D. J.

    1985-01-01

    A real-time multiprocessor programming language (RTMPL) has been developed to provide for high-order programming of real-time simulations on systems of distributed computers. RTMPL is a structured, engineering-oriented language. The RTMPL utility supports a variety of multiprocessor configurations and types by generating assembly language programs according to user-specified targeting information. Many programming functions are assumed by the utility (e.g., data transfer and scaling) to reduce the programming chore. This manual describes RTMPL from a user's viewpoint. Source generation, applications, utility operation, and utility output are detailed. An example simulation is generated to illustrate many RTMPL features.

  13. A locus for an auditory processing deficit and language impairment in an extended pedigree maps to 12p13.31-q14.3

    PubMed Central

    Addis, L; Friederici, A D; Kotz, S A; Sabisch, B; Barry, J; Richter, N; Ludwig, A A; Rübsamen, R; Albert, F W; Pääbo, S; Newbury, D F; Monaco, A P

    2010-01-01

    Despite the apparent robustness of language learning in humans, a large number of children still fail to develop appropriate language skills even when given adequate means and opportunity. Most cases of language impairment have a complex etiology, with genetic and environmental influences. In contrast, we describe a three-generation German family who present with an apparently simple segregation of language impairment. Investigations of the family indicate auditory processing difficulties as a core deficit. Affected members performed poorly on a nonword repetition task and present with communication impairments. The brain activation pattern for syllable duration as measured by event-related brain potentials showed clear differences between affected family members and controls, with only affected members displaying a late discrimination negativity. In conjunction with psychoacoustic data showing deficiencies in auditory duration discrimination, the present results indicate increased processing demands in discriminating syllables of different duration. This, we argue, forms the cognitive basis of the observed language impairment in this family. Genome-wide linkage analysis showed a haplotype in the central region of chromosome 12 which reaches the maximum possible logarithm of the odds (LOD) score and fully co-segregates with the language impairment, consistent with an autosomal dominant, fully penetrant mode of inheritance. Whole genome analysis yielded no novel inherited copy number variants, strengthening the case for a simple inheritance pattern. Several genes in this region of chromosome 12 which are potentially implicated in language impairment did not contain polymorphisms likely to be the causative mutation, which is as yet unknown. PMID:20345892

  14. Invented Spelling, Word Stress, and Syllable Awareness in Relation to Reading Difficulties in Children.

    PubMed

    Mehta, Sheena; Ding, Yi; Ness, Molly; Chen, Eric C

    2018-06-01

    The study assessed the clinical utility of an invented spelling tool and determined whether invented spelling with linguistic manipulation at segmental and supra-segmental levels can be used to better identify reading difficulties. We conducted linguistic manipulation by using real and nonreal words, incorporating word stress, alternating the order of consonants and vowels, and alternating the number of syllables. We recruited 60 third-grade students, of which half were typical readers and half were poor readers. The invented spelling task consistently differentiated those with reading difficulties from typical readers. It explained unique variance in conventional spelling, but not in word reading. Word stress explained unique variance in both word reading and conventional spelling, highlighting the importance of addressing phonological awareness at the supra-segmental level. Poor readers had poorer performance when spelling both real and nonreal words and demonstrated substantial difficulty in detecting word stress. Poor readers struggled with spelling words with double consonants at the beginning and ending of words, and performed worse on spelling two- and three-syllable words than typical readers. Practical implications for early identification and instruction are discussed.

  15. Verbal cues effectively orient children's auditory attention in a CV-syllable dichotic listening paradigm.

    PubMed

    Phélip, Marion; Donnot, Julien; Vauclair, Jacques

    2015-12-18

    In their groundbreaking work featuring verbal dichotic listening tasks, Mondor and Bryden showed that, unlike for adults, tone cues do not enhance children's attentional orienting. The magnitude of the children's right-ear advantage was not attenuated when their attention was directed to the left ear. Verbal cues did, however, appear to favour the orientation of attention at around 10 years, although stimulus-onset asynchronies (SOAs), which ranged between 450 and 750 ms, were not rigorously controlled. The aim of our study was therefore to investigate the role of both types of cues in a typical CV-syllable dichotic listening task administered to 8- to 10-year-olds, applying a protocol as similar as possible to that used by Mondor and Bryden, but controlling for SOA as well as for cued ear. Results confirmed that verbal cues are more effective than tone cues in orienting children's attention. However, in contrast to adults, no effect of SOA was observed. We discuss the relative difficulty young children have processing CV syllables, as well as the role of top-down processes in attentional orienting abilities.

  16. Language Promotes False-Belief Understanding

    PubMed Central

    Pyers, Jennie E.; Senghas, Ann

    2010-01-01

    Developmental studies have identified a strong correlation in the timing of language development and false-belief understanding. However, the nature of this relationship remains unresolved. Does language promote false-belief understanding, or does it merely facilitate development that could occur independently, albeit on a delayed timescale? We examined language development and false-belief understanding in deaf learners of an emerging sign language in Nicaragua. The use of mental-state vocabulary and performance on a low-verbal false-belief task were assessed, over 2 years, in adult and adolescent users of Nicaraguan Sign Language. Results show that those adults who acquired a nascent form of the language during childhood produce few mental-state signs and fail to exhibit false-belief understanding. Furthermore, those whose language developed over the period of the study correspondingly developed in false-belief understanding. Thus, language learning, over and above social experience, drives the development of a mature theory of mind. PMID:19515119

  17. Cost evaluation of a DSN high level real-time language

    NASA Technical Reports Server (NTRS)

    Mckenzie, M.

    1977-01-01

    The hypothesis that the implementation of a DSN High-Level Real-Time Language will reduce real-time software expenditures is explored. The High-Level Real-Time Language is found to be both affordable and cost-effective.

  18. Rapid Syllable Transitions (ReST) treatment for Childhood Apraxia of Speech: the effect of lower dose-frequency.

    PubMed

    Thomas, Donna C; McCabe, Patricia; Ballard, Kirrie J

    2014-01-01

    This study investigated the effectiveness of twice-weekly Rapid Syllable Transitions (ReST) treatment for Childhood Apraxia of Speech (CAS). ReST is an effective treatment at a frequency of four sessions a week for three consecutive weeks. In this study we used a multiple-baselines across participants design to examine treatment efficacy for four children with CAS, aged four to eight years, who received ReST treatment twice a week for six weeks. The children's ability to acquire new skills, generalize these skills to untreated items and maintain the skills after treatment was examined. All four children improved their production of the target items. Two of the four children generalized the treatment effects to similar untreated pseudo words and all children generalized to untreated real words. During the maintenance phase, all four participants maintained their skills to four months post-treatment, with a stable rather than rising profile. This study shows that ReST treatment delivered twice-weekly results in significant retention of treatment effects to four months post-treatment and generalization to untrained but related speech behaviors. Compared to ReST therapy four times per week, the twice-weekly frequency produces similar treatment gains but no ongoing improvement after the cessation of treatment. This implies that there may be a small but significant benefit of four times weekly therapy compared with twice-weekly ReST therapy. Readers will be able to define dose-frequency, and describe how this relates to overall intervention intensity. Readers will be able to explain the acquisition, generalization and maintenance effects in the study and describe how these compare to higher dose frequency treatments. Readers will recognize that the current findings give preliminary support for high dose-frequency CAS treatment. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Insights into Second Language Acquisition Theory and Different Approaches to Language Teaching

    ERIC Educational Resources Information Center

    Ponniah, Joseph

    2010-01-01

    This paper attempts to review second language acquisition theory and some of the methods practiced in language classes. The review substantiates that comprehensible input as the crucial determining factor for language acquisition and consciously learned linguistic knowledge can be used only to edit the output of the acquired language sometimes…

  20. Schizophrenia and second language acquisition.

    PubMed

    Bersudsky, Yuly; Fine, Jonathan; Gorjaltsan, Igor; Chen, Osnat; Walters, Joel

    2005-05-01

    Language acquisition involves brain processes that can be affected by lesions or dysfunctions in several brain systems, and second language acquisition may depend on different brain substrates than first language acquisition in childhood. A total of 16 Russian immigrants to Israel, 8 diagnosed schizophrenics and 8 healthy immigrants, were compared. The primary data for this study were collected via sociolinguistic interviews. The two groups use language and learn language in very much the same way. Only exophoric reference and blocking revealed meaningful differences between the schizophrenics and their healthy counterparts. This does not mean, of course, that schizophrenia does not induce language abnormalities. Our study focuses on those aspects of language that are typically difficult to acquire in second language acquisition. Despite the cognitive compromises in schizophrenia and the manifest atypicalities in language of speakers with schizophrenia, the process of acquiring a second language seems relatively unaffected by schizophrenia.

  1. Language Model Combination and Adaptation Using Weighted Finite State Transducers

    NASA Technical Reports Server (NTRS)

    Liu, X.; Gales, M. J. F.; Hieronymus, J. L.; Woodland, P. C.

    2010-01-01

    In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. They can either be used to represent different genres or tasks found in diverse text sources, or to capture stochastic properties of different linguistic symbol sequences, for example, syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal changes to decoding tools are needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences.
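The WFST machinery described above is library-level work, but the underlying idea of combining multiple n-gram LMs can be illustrated more simply. The sketch below is a hypothetical, minimal illustration of LM combination by linear interpolation of unigram models; it is not the paper's WFST-based implementation, and the toy models and the `interpolate` helper are invented for this example:

```python
# Minimal sketch: combining two language models by linear interpolation.
# This illustrates the general idea of LM combination; the paper above
# instead expresses the combination as weighted finite state transducer
# (WFST) operations so that existing decoders need minimal changes.

def interpolate(lm_a, lm_b, weight):
    """Return P(w) = weight * P_a(w) + (1 - weight) * P_b(w) over both vocabularies."""
    vocab = set(lm_a) | set(lm_b)
    return {w: weight * lm_a.get(w, 0.0) + (1 - weight) * lm_b.get(w, 0.0)
            for w in vocab}

# Two toy unigram models, e.g. trained on different genres.
news  = {"stocks": 0.5, "fell": 0.3, "today": 0.2}
sport = {"goal": 0.6, "today": 0.4}

combined = interpolate(news, sport, weight=0.7)
```

Interpolation weights are normally tuned on held-out data; the appeal of the WFST formulation is that this kind of combination, and far more complex configurations, become well-defined transducer operations rather than ad hoc software changes.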

  2. Real-time lexical comprehension in young children learning American Sign Language.

    PubMed

    MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne

    2018-04-16

    When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality. © 2018 John Wiley & Sons Ltd.

  3. Language and Cognition Interaction Neural Mechanisms

    PubMed Central

    Perlovsky, Leonid

    2011-01-01

    How do language and cognition interact in thinking? Is language just used for communication of completed thoughts, or is it fundamental for thinking? Existing approaches have not led to a computational theory. We develop a hypothesis that language and cognition are two separate but closely interacting mechanisms. Language accumulates cultural wisdom; cognition develops mental representations modeling the surrounding world and adapts cultural knowledge to concrete circumstances of life. Language is acquired from the surrounding language “ready-made” and therefore can be acquired early in life. This early acquisition of language in childhood encompasses the entire hierarchy from sounds to words, to phrases, and to the highest concepts existing in culture. Cognition is developed from experience. Yet cognition cannot be acquired from experience alone; language is a necessary intermediary, a “teacher.” A mathematical model is developed; it overcomes previous difficulties and leads to a computational theory. This model is consistent with Arbib's “language prewired brain” built on top of the mirror neuron system. It models recent neuroimaging data about cognition that have gone unnoticed by other theories. A number of properties of language and cognition are explained, which previously seemed mysterious, including the influence of language grammar on cultural evolution, which may explain specifics of English and Arabic cultures. PMID:21876687

  4. Rhythm's Gonna Get You: Regular Meter Facilitates Semantic Sentence Processing

    ERIC Educational Resources Information Center

    Rothermich, Kathrin; Schmidt-Kassow, Maren; Kotz, Sonja A.

    2012-01-01

    Rhythm is a phenomenon that fundamentally affects the perception of events unfolding in time. In language, we define "rhythm" as the temporal structure that underlies the perception and production of utterances, whereas "meter" is defined as the regular occurrence of beats (i.e. stressed syllables). In stress-timed languages such as German, this…

  5. Can Non-Interactive Language Input Benefit Young Second-Language Learners?

    ERIC Educational Resources Information Center

    Au, Terry Kit-fong; Chan, Winnie Wailan; Cheng, Liao; Siegel, Linda S.; Tso, Ricky Van Yip

    2015-01-01

    To fully acquire a language, especially its phonology, children need linguistic input from native speakers early on. When interaction with native speakers is not always possible--e.g. for children learning a second language that is not the societal language--audios are commonly used as an affordable substitute. But does such non-interactive input…

  6. Time in Language, Language in Time

    ERIC Educational Resources Information Center

    Klein, Wolfgang

    2008-01-01

    Many millenia ago, a number of genetic changes endowed the human species with the remarkable capacity: (1) to construct highly complex systems of expressions--human languages; (2) to copy such systems, once created, from other members of the species; and (3) to use them for communicative and perhaps other purposes. This capacity is not uniform; it…

  7. The Parsing Syllable Envelopes Test for Assessment of Amplitude Modulation Discrimination Skills in Children: Development, Normative Data, and Test-Retest Reliability Studies.

    PubMed

    Cameron, Sharon; Chong-White, Nicky; Mealings, Kiri; Beechey, Tim; Dillon, Harvey; Young, Taegan

    2018-02-01

    Intensity peaks and valleys in the acoustic signal are salient cues to syllable structure, which is accepted to be a crucial early step in phonological processing. As such, the ability to detect low-rate (envelope) modulations in signal amplitude is essential to parse an incoming speech signal into smaller phonological units. The Parsing Syllable Envelopes (ParSE) test was developed to quantify the ability of children to recognize syllable boundaries using an amplitude modulation detection paradigm. The envelope of a 750-msec steady-state /a/ vowel is modulated into two or three pseudo-syllables using notches with modulation depths varying between 0% and 100% along an 11-step continuum. In an adaptive three-alternative forced-choice procedure, the participant identified whether one, two, or three pseudo-syllables were heard. Development of the ParSE stimuli and test protocols, and collection of normative and test-retest reliability data. Eleven adults (aged 23 yr 10 mo to 50 yr 9 mo, mean 32 yr 10 mo) and 134 typically developing, primary-school children (aged 6 yr 0 mo to 12 yr 4 mo, mean 9 yr 3 mo). There were 73 males and 72 females. Data were collected using a touchscreen computer. Psychometric functions (PFs) were automatically fit to individual data by the ParSE software. Performance was related to the modulation depth at which syllables can be detected with 88% accuracy (referred to as the upper boundary of the uncertainty region [UBUR]). A shallower PF slope reflected a greater level of uncertainty. Age effects were determined based on raw scores. z Scores were calculated to account for the effect of age on performance. Outliers, and individual data for which the confidence interval of the UBUR exceeded a maximum allowable value, were removed. Nonparametric tests were used as the data were skewed toward negative performance. Across participants, the performance criterion (UBUR) was met with a median modulation depth of 42%. The effect of age on the UBUR was…
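The ParSE performance criterion, the modulation depth detected with 88% accuracy, is read off a fitted psychometric function. As a hedged sketch of how such a threshold is recovered, the code below inverts a logistic psychometric function for a three-alternative forced-choice task; the logistic form and all parameter values here are illustrative assumptions, not the ParSE software's actual fitting procedure:

```python
import math

# Illustrative sketch: a logistic psychometric function mapping modulation
# depth (%) to proportion correct in a 3AFC task (chance = 1/3), and its
# inversion to find the depth at which a target accuracy (e.g. 88%) is
# reached. The midpoint and slope values are hypothetical.

def psychometric(depth, midpoint, slope, chance=1 / 3):
    """Proportion correct as a function of modulation depth."""
    return chance + (1 - chance) / (1 + math.exp(-slope * (depth - midpoint)))

def threshold(p_target, midpoint, slope, chance=1 / 3):
    """Invert the logistic to find the depth giving p_target correct."""
    q = (p_target - chance) / (1 - chance)
    return midpoint - math.log(1 / q - 1) / slope

# Depth (%) at which this hypothetical listener reaches 88% correct.
depth_88 = threshold(0.88, midpoint=35.0, slope=0.15)
```

In practice the midpoint and slope would themselves be estimated from a participant's trial data; a shallower fitted slope spreads the uncertainty region wider, which is what the abstract's slope-as-uncertainty interpretation reflects.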

  8. Sensor-Generated Time Series Events: A Definition Language

    PubMed Central

    Anguera, Aurea; Lara, Juan A.; Lizcano, David; Martínez, Maria Aurora; Pazos, Juan

    2012-01-01

    There are now a great many domains where information is recorded by sensors over a limited time period or on a permanent basis. This data flow leads to sequences of data known as time series. In many domains, like seismography or medicine, time series analysis focuses on particular regions of interest, known as events, whereas the remainder of the time series contains hardly any useful information. In these domains, there is a need for mechanisms to identify and locate such events. In this paper, we propose an events definition language that is general enough to be used to easily and naturally define events in time series recorded by sensors in any domain. The proposed language has been applied to the definition of time series events generated within the branch of medicine dealing with balance-related functions in human beings. A device, called posturograph, is used to study balance-related functions. The platform has four sensors that record the pressure intensity being exerted on the platform, generating four interrelated time series. As opposed to the existing ad hoc proposals, the results confirm that the proposed language is valid, that is generally applicable and accurate, for identifying the events contained in the time series.
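The notion of an "event" in a sensor time series can be made concrete with a small sketch. Purely as an illustration, and far simpler than the definition language the record describes, assume an event is a maximal run of consecutive samples exceeding a threshold; the `find_events` helper and the sample readings below are invented for this example:

```python
# Minimal sketch of time-series event detection: an event is taken here,
# purely for illustration, to be a maximal run of consecutive samples
# whose value exceeds a fixed threshold.

def find_events(series, threshold):
    """Return (start, end) index pairs of maximal runs above threshold.

    `end` is exclusive, so series[start:end] is the event region.
    """
    events = []
    start = None
    for i, value in enumerate(series):
        if value > threshold and start is None:
            start = i                      # event begins
        elif value <= threshold and start is not None:
            events.append((start, i))      # event ends
            start = None
    if start is not None:                  # event runs to the end
        events.append((start, len(series)))
    return events

# Example: made-up pressure readings from one posturograph sensor.
readings = [0.1, 0.2, 1.5, 1.7, 0.3, 0.2, 2.1, 2.0, 1.9, 0.1]
events = find_events(readings, threshold=1.0)
# events -> [(2, 4), (6, 9)]
```

A real events definition language generalizes this idea: the predicate that opens and closes an event becomes a user-specified expression over one or several interrelated series, rather than a fixed threshold.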

  9. The effects of age, viewing distance, display type, font type, colour contrast and number of syllables on the legibility of Korean characters.

    PubMed

    Kong, Yong-Ku; Lee, Inseok; Jung, Myung-Chul; Song, Young-Woong

    2011-05-01

    This study evaluated the effects of age (20s and 60s), viewing distance (50 cm, 200 cm), display type (paper, monitor), font type (Gothic, Ming), colour contrast (black letters on white background, white letters on black background) and number of syllables (one, two) on the legibility of Korean characters by using the four legibility measures (minimum letter size for 100% correctness, maximum letter size for 0% correctness, minimum letter size for the least discomfort and maximum letter size for the most discomfort). Ten subjects in each age group read the four letters presented on a slide (letter size varied from 80 pt to 2 pt). Subjects also subjectively rated the reading discomfort of the letters on a 4-point scale (1 = no discomfort, 4 = most discomfort). According to the ANOVA procedure, age, viewing distance and font type significantly affected the four dependent variables (p < 0.05), while the main effect of colour contrast was not statistically significant for any measures. Two-syllable words could be read at smaller letter sizes than one-syllable words on the two correctness measures. The younger group could read letters about two times smaller than the older group could, and letters viewed at 50 cm could be about three times smaller than those viewed at 200 cm. Gothic fonts remained legible at smaller sizes than Ming fonts, and monitors at smaller sizes than paper for the correctness measures and for maximum letter size for the most discomfort. From a comparison of the results for correctness and discomfort, people generally preferred larger letter sizes than those that they could read. The findings of this study may provide basic information for setting a global standard of letter size or font type to improve the legibility of characters written in Korean. STATEMENT OF RELEVANCE: Results obtained in this study will provide basic information and guidelines for setting standards of letter size and font type to improve the legibility of characters written in Korean. Also, the results might…

  10. Acquiring Cultural Perceptions during Study Abroad: The Influence of Youthful Associates

    ERIC Educational Resources Information Center

    Meredith, R. Alan

    2010-01-01

    The interdependence of language and culture highlights the need to find methods for second language students to acquire cultural information and practices. This article reviews definitions of culture posited by anthropologists and language educators and discusses problems related to the recent paradigm shift from "small "c" and big…

  11. The Simple View of Reading in Bilingual Language-Minority Children Acquiring a Highly Transparent Second Language

    ERIC Educational Resources Information Center

    Bonifacci, Paola; Tobia, Valentina

    2017-01-01

    The present study evaluated which components within the simple view of reading model better predicted reading comprehension in a sample of bilingual language-minority children exposed to Italian, a highly transparent language, as a second language. The sample included 260 typically developing bilingual children who were attending either the first…

  12. Broca's area and the language instinct.

    PubMed

    Musso, Mariacristina; Moro, Andrea; Glauche, Volkmar; Rijntjes, Michel; Reichenbach, Jürgen; Büchel, Christian; Weiller, Cornelius

    2003-07-01

    Language acquisition in humans relies on abilities like abstraction and use of syntactic rules, which are absent in other animals. The neural correlate of acquiring new linguistic competence was investigated with two functional magnetic resonance imaging (fMRI) studies. German native speakers learned a sample of 'real' grammatical rules of different languages (Italian or Japanese), which, although parametrically different, follow the universal principles of grammar (UG). Activity during this task was compared with that during a task that involved learning 'unreal' rules of language. 'Unreal' rules were obtained by manipulating the original two languages; they used the same lexicon as Italian or Japanese, but were linguistically illegal, as they violated the principles of UG. The increase of activation over time in Broca's area was specific to 'real' language acquisition, independent of the kind of language. Thus, in Broca's area, biological constraints and language experience interact to enable linguistic competence for a new language.

  13. Can Student Teachers Acquire Core Skills for Teaching from Part-Time Employment?

    ERIC Educational Resources Information Center

    Wylie, Ken; Cummins, Brian

    2013-01-01

    Part-time employment among university students has become commonplace internationally. Research has largely focused on the impact of part-time employment on academic performance. This research takes an original approach in that it poses the question whether students can acquire core skills relevant to teaching from their part-time employment. The…

  14. Wandering tales: evolutionary origins of mental time travel and language

    PubMed Central

    Corballis, Michael C.

    2013-01-01

    A central component of mind wandering is mental time travel, the calling to mind of remembered past events and of imagined future ones. Mental time travel may also be critical to the evolution of language, which enables us to communicate about the non-present, sharing memories, plans, and ideas. Mental time travel is indexed in humans by hippocampal activity, and studies also suggest that the hippocampus in rats is active when the animals replay or preplay activity in a spatial environment, such as a maze. Mental time travel may have ancient origins, contrary to the view that it is unique to humans. Since mental time travel is also thought to underlie language, these findings suggest that language evolved gradually from pre-existing cognitive capacities, contrary to the view of Chomsky and others that language and symbolic thought emerged abruptly, in a single step, within the past 100,000 years. PMID:23908641

  15. Language impairment is reflected in auditory evoked fields.

    PubMed

    Pihko, Elina; Kujala, Teija; Mickos, Annika; Alku, Paavo; Byring, Roger; Korkman, Marit

    2008-05-01

    Specific language impairment (SLI) is diagnosed when a child has problems in producing or understanding language despite having a normal IQ and there being no other obvious explanation. There can be several associated problems, and no single underlying cause has yet been identified. Some theories propose problems in auditory processing, specifically in the discrimination of sound frequency or rapid temporal frequency changes. We compared automatic cortical speech-sound processing and discrimination between a group of children with SLI and control children with normal language development (mean age: 6.6 years; range: 5-7 years). We measured auditory evoked magnetic fields using two sets of CV syllables, one with a changing consonant /da/ba/ga/ and another one with a changing vowel /su/so/sy/ in an oddball paradigm. The P1m responses for onsets of repetitive stimuli were weaker in the SLI group whereas no significant group differences were found in the mismatch responses. The results indicate that the SLI group, having weaker responses to the onsets of sounds, might have slightly depressed sensory encoding.

  16. Identifying Specific Language Impairment in Deaf Children Acquiring British Sign Language: Implications for Theory and Practice

    ERIC Educational Resources Information Center

    Mason, Kathryn; Rowley, Katherine; Marshall, Chloe R.; Atkinson, Joanna R.; Herman, Rosalind; Woll, Bencie; Morgan, Gary

    2010-01-01

    This paper presents the first ever group study of specific language impairment (SLI) in users of sign language. A group of 50 children were referred to the study by teachers and speech and language therapists. Individuals who fitted pre-determined criteria for SLI were then systematically assessed. Here, we describe in detail the performance of 13…

  17. Time series patterns and language support in DBMS

    NASA Astrophysics Data System (ADS)

    Telnarova, Zdenka

    2017-07-01

    This contribution focuses on the Time Series pattern type as a semantically rich representation of data. Examples of implementing this pattern type in traditional database management systems are briefly presented. There are many approaches to manipulating and querying patterns. A crucial issue is a systematic approach to pattern management and a pattern-specific query language that takes the semantics of patterns into account. The query language SQL-TS for manipulating patterns is demonstrated on time series data.
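    The abstract names SQL-TS but gives no syntax. As a rough, hypothetical illustration of what a pattern query over time series accomplishes, the following Python sketch scans a series for a simple "rise then fall" shape; the function and data are invented for illustration and are not SQL-TS:

```python
# Minimal sketch of time-series pattern matching, the kind of shape a
# pattern query language such as SQL-TS lets one declare. All names
# here are illustrative; this is not SQL-TS syntax.

def find_rise_fall(series):
    """Return indices i where the series rises into i and falls out of it."""
    matches = []
    for i in range(1, len(series) - 1):
        if series[i - 1] < series[i] > series[i + 1]:
            matches.append(i)
    return matches

values = [1, 3, 5, 4, 2, 6, 7, 3]
print(find_rise_fall(values))  # local maxima at indices 2 and 6
```

    A declarative pattern language would express the same "previous < current > next" condition as a predicate over consecutive rows rather than as an explicit loop.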

  18. Spoken language outcomes after hemispherectomy: factoring in etiology.

    PubMed

    Curtiss, S; de Bode, S; Mathern, G W

    2001-12-01

    We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) and acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p =.0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p =.0006); right-sided resections led to higher SLRs only for the acquired group (p =.0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p =.0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology. Copyright 2001 Elsevier Science.

  19. Language balance and switching ability in children acquiring English as a second language.

    PubMed

    Goriot, Claire; Broersma, Mirjam; McQueen, James M; Unsworth, Sharon; van Hout, Roeland

    2018-09-01

    This study investigated whether relative lexical proficiency in Dutch and English in child second language (L2) learners is related to executive functioning. Participants were Dutch primary school pupils of three different age groups (4-5, 8-9, and 11-12 years) who either were enrolled in an early-English schooling program or were age-matched controls not on that early-English program. Participants performed tasks that measured switching, inhibition, and working memory. Early-English program pupils had greater knowledge of English vocabulary and more balanced Dutch-English lexicons. In both groups, lexical balance, a ratio measure obtained by dividing vocabulary scores in English by those in Dutch, was related to switching but not to inhibition or working memory performance. These results show that for children who are learning an L2 in an instructional setting, and for whom managing two languages is not yet an automatized process, language balance may be more important than L2 proficiency in influencing the relation between childhood bilingualism and switching abilities. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Effect of Syllable Congruency in Sixth Graders in the Lexical Decision Task with Masked Priming

    ERIC Educational Resources Information Center

    Chetail, Fabienne; Mathey, Stephanie

    2012-01-01

    The aim of this study was to investigate the role of the syllable in visual recognition of French words in Grade 6. To do so, the syllabic congruency effect was examined in the lexical decision task combined with masked priming. Target words were preceded by pseudoword primes sharing the first letters that either corresponded to the syllable…

  1. A comparison of aphasic and non-brain-injured adults on a dichotic CV-syllable listening task.

    PubMed

    Shanks, J; Ryan, W

    1976-06-01

    A dichotic CV-syllable listening task was administered to a group of eleven non-brain-injured adults and to a group of eleven adult aphasics. The results of this study may be summarized as follows: 1) The group of non-brain-injured adults showed a slight right ear advantage for dichotically presented CV-syllables. 2) In comparison with the control group, the aphasic group showed a bilateral deficit in response to the dichotic CV-syllables, superimposed on a non-significant right ear advantage. 3) The aphasic group demonstrated a great deal of intersubject variability on the dichotic task, with six aphasics showing a right ear preference for the stimuli; the non-brain-injured subjects performed more homogeneously on the task. 4) The two subgroups of aphasics, a right ear advantage group and a left ear advantage group, performed significantly differently on the dichotic listening task. 5) Single-correct data analysis proved valuable by setting aside overall accuracy of report in favor of an examination of trials in which there was true competition for the single left-hemispheric speech processor. These results were analyzed in terms of a functional model of auditory processing. In view of this model, the bilateral deficit in dichotic performance of the aphasic group was accounted for by the presence of a lesion within the dominant left hemisphere, where the speech signals from both ears converge for final processing. The right ear advantage shown by one aphasic subgroup was explained by a lesion interfering with the corpus callosal pathways from the left hemisphere; the left ear advantage observed within the other subgroup was explained by a lesion in the area of the auditory processor of the left hemisphere.

  2. Development and transfer of vocabulary knowledge in Spanish-speaking language minority preschool children.

    PubMed

    Goodrich, J Marc; Lonigan, Christopher J; Kleuver, Cherie G; Farver, Joann M

    2016-09-01

    In this study we evaluated the predictive validity of conceptual scoring. Two independent samples of Spanish-speaking language minority preschoolers (Sample 1: N = 96, mean age = 54·51 months, 54·3% male; Sample 2: N = 116, mean age = 60·70 months, 56·0% male) completed measures of receptive, expressive, and definitional vocabulary in their first (L1) and second (L2) languages at two time points approximately 9-12 months apart. We examined whether unique L1 and L2 vocabulary at time 1 predicted later L2 and L1 vocabulary, respectively. Results indicated that unique L1 vocabulary did not predict later L2 vocabulary after controlling for initial L2 vocabulary. An identical pattern of results emerged for L1 vocabulary outcomes. We also examined whether children acquired translational equivalents for words known in one language but not the other. Results indicated that children acquired translational equivalents, providing partial support for the transfer of vocabulary knowledge across languages.

  3. Rhythm in Ethiopian English: Implications for the Teaching of English Prosody

    ERIC Educational Resources Information Center

    Gashaw, Anegagregn

    2017-01-01

    In order to verify that English speeches produced by Ethiopian speakers fall under syllable-timed or stress-timed rhythm, the study tried to examine the nature of stress and rhythm in the pronunciation of Ethiopian speakers of English by focusing on one language group speaking Amharic as a native language. Using acoustic analysis of the speeches…

  4. Infant diet-related changes in syllable processing between 4 and 5 months: Implications for developing native language sensitivity

    USDA-ARS?s Scientific Manuscript database

    Since maturational processes triggering increased attunement to native language features in early infancy are sensitive to dietary factors, infant-diet related differences in brain processing of native-language speech stimuli might indicate variations in onset of this tuning process. We measured cor...

  5. Songs as an aid for language acquisition.

    PubMed

    Schön, Daniele; Boyer, Maud; Moreno, Sylvain; Besson, Mireille; Peretz, Isabelle; Kolinsky, Régine

    2008-02-01

    In previous research, Saffran and colleagues [Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274, 1926-1928; Saffran, J. R., Newport, E. L., & Aslin, R. N. (1996). Word segmentation: The role of distributional cues. Journal of Memory and Language, 35, 606-621.] have shown that adults and infants can use the statistical properties of syllable sequences to extract words from continuous speech. They also showed that a similar learning mechanism operates with musical stimuli [Saffran, J. R., Johnson, E. K., Aslin, R. N., & Newport, E. L. (1999). Statistical learning of tone sequences by human infants and adults. Cognition, 70, 27-52.]. In this work we combined linguistic and musical information and compared language learning based on speech sequences to language learning based on sung sequences. We hypothesized that, compared to speech sequences, a consistent mapping of linguistic and musical information would enhance learning. Results confirmed the hypothesis, showing a strong learning facilitation for song compared to speech. Most importantly, the present results show that learning a new language, especially in the first learning phase wherein one needs to segment new words, may largely benefit from the motivational and structuring properties of music in song.

  6. Neurocomputational Consequences of Evolutionary Connectivity Changes in Perisylvian Language Cortex.

    PubMed

    Schomers, Malte R; Garagnani, Max; Pulvermüller, Friedemann

    2017-03-15

    The human brain sets itself apart from that of its primate relatives by specific neuroanatomical features, especially the strong linkage of left perisylvian language areas (frontal and temporal cortex) by way of the arcuate fasciculus (AF). AF connectivity has been shown to correlate with verbal working memory-a specifically human trait providing the foundation for language abilities-but a mechanistic explanation of any related causal link between anatomical structure and cognitive function is still missing. Here, we provide a possible explanation and link, by using neurocomputational simulations in neuroanatomically structured models of the perisylvian language cortex. We compare networks mimicking key features of cortical connectivity in monkeys and humans, specifically the presence of relatively stronger higher-order "jumping links" between nonadjacent perisylvian cortical areas in the latter, and demonstrate that the emergence of working memory for syllables and word forms is a functional consequence of this structural evolutionary change. We also show that a mere increase of learning time is not sufficient, but that this specific structural feature, which entails higher connectivity degree of relevant areas and shorter sensorimotor path length, is crucial. These results offer a better understanding of specifically human anatomical features underlying the language faculty and their evolutionary selection advantage. SIGNIFICANCE STATEMENT Why do humans have superior language abilities compared to primates? Recently, a uniquely human neuroanatomical feature has been demonstrated in the strength of the arcuate fasciculus (AF), a fiber pathway interlinking the left-hemispheric language areas. Although AF anatomy has been related to linguistic skills, an explanation of how this fiber bundle may support language abilities is still missing. We use neuroanatomically structured computational models to investigate the consequences of evolutionary changes in language area…

  7. Literacy and Second Language Intervention for Adult Hebrew Second Language (HSL) Learners

    ERIC Educational Resources Information Center

    Fanta-Vagenshtein, Yarden

    2011-01-01

    Language proficiency is a crucial factor for immigrants to integrate successfully in the new society in all aspects of life, especially in the labor market. As a result, there is great importance in acquiring the new language as quickly and effectively as possible. Several factors affect second language acquisition, including motivation, age,…

  8. A Corpus-Based Comparative Study of "Learn" and "Acquire"

    ERIC Educational Resources Information Center

    Yang, Bei

    2016-01-01

    As an important yet intricate linguistic feature in English language, synonymy poses a great challenge for second language learners. Using the 100 million-word British National Corpus (BNC) as data and the software Sketch Engine (SkE) as an analyzing tool, this article compares the usage of "learn" and "acquire" used in natural…

  9. Reading acquisition, AAC and the transferability of english research to languages with more consistent or transparent orthographies.

    PubMed

    Erickson, Karen; Sachse, Stefanie

    2010-09-01

    Research on reading in augmentative and alternative communication (AAC) is primarily available for English, a language with a deep (nontransparent) orthography and a complex syllable structure. While there is a great deal still to learn about English reading in AAC, substantially more information exists on reading in AAC in English than in other languages. In this article we compare reading acquisition in English and German, drawing on the existing research on reading for children with complex communication needs and describing how it might apply to German and other European languages with orthographies that are more consistent than English (e.g., Swedish, Spanish, Finnish; Aro & Wimmer, 2003). The goal is to support the development of cross-linguistic understandings of reading and AAC.

  10. Effect of tones on vocal attack time in Cantonese speakers.

    PubMed

    Ma, Estella P-M; Baken, R J; Roark, Rick M; Li, P-M

    2012-09-01

    Vocal attack time (VAT) is the time lag between the growth of the sound pressure signal and the development of physical contact of the vocal folds at vocal initiation. It can be derived by cross-correlating short-time amplitude changes in the sound pressure and electroglottographic (EGG) signals. Cantonese is a tone language in which tone determines the lexical meaning of the syllable; this linguistic function of tone has implications for the physiology of tone production. The aim of the present study was to investigate the possible effects of Cantonese tones on VAT. Sound pressure and EGG signals were simultaneously recorded from 59 native Cantonese speakers (31 females and 28 males). The subjects were asked to read aloud 12 disyllabic words comprising homophone pairs of the six Cantonese lexical tones. Results revealed a gender difference in VAT values, with the mean VAT significantly smaller in females than in males. There was also a significant difference in VAT values between the two tone categories, with the mean VAT values of the three level tones (tones 1, 3, and 6) significantly smaller than those of the three contour tones (tones 2, 4, and 5). The findings support the notion that norms and interpretations based on nontone European languages may not be directly applied to tone languages. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
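    The cross-correlation step described in this abstract can be sketched roughly as follows, using synthetic amplitude envelopes in place of real sound-pressure and EGG recordings (a simplified illustration, not the authors' measurement procedure; the sampling rate, signal shapes, and lag are all assumed):

```python
# Estimating the lag between two signals by cross-correlation, analogous
# to deriving vocal attack time from sound-pressure and EGG amplitude
# envelopes. Synthetic data; simplified illustration only.
import numpy as np

fs = 1000                                      # sampling rate in Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)
envelope = np.exp(-((t - 0.05) ** 2) / 1e-4)   # sound-pressure amplitude growth
true_lag = 15                                  # EGG contact develops 15 samples later
egg = np.roll(envelope, true_lag)              # EGG envelope, shifted in time

# Cross-correlate and find the shift that best aligns the two envelopes.
corr = np.correlate(egg, envelope, mode="full")
est_lag = corr.argmax() - (len(envelope) - 1)
print(est_lag / fs * 1000)  # estimated lag in milliseconds (15.0 here)
```

    The peak of the cross-correlation function marks the time shift at which the two amplitude-change signals are most similar, which is the quantity VAT measures at vocal initiation.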

  11. Investigating the Retention and Time-Course of Phonotactic Constraint Learning From Production Experience

    PubMed Central

    Warker, Jill A.

    2013-01-01

    Adults can rapidly learn artificial phonotactic constraints, such as that /f/ occurs only at the beginning of syllables, by producing syllables that follow those constraints. This implicit learning is then reflected in their speech errors. However, second-order constraints, in which the placement of a phoneme depends on another characteristic of the syllable (e.g., if the vowel is /æ/, /f/ occurs at the beginning of syllables and /s/ at the end, but if the vowel is /I/ the reverse is true), require a longer learning period. Two experiments asked how durable second-order learning is and whether consolidation plays a role in learning phonological dependencies. Using speech errors as a measure of learning, Experiment 1 investigated the durability of learning, and Experiment 2 investigated its time-course. Experiment 1 found that learning is still present in speech errors a week later. Experiment 2 examined whether more time, in the form of a consolidation period, or more experience, in the form of more trials, was necessary for learning to be revealed in speech errors. Both consolidation and more trials led to learning; however, consolidation provided the more substantial benefit. PMID:22686839
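    The second-order constraint described in this abstract amounts to a small lookup table: which consonant may begin versus end a syllable depends on its vowel. The following Python sketch (names and ASCII notation invented for illustration; not the experimental materials) checks a CVC syllable against that vowel-dependent rule:

```python
# Toy encoding of a second-order phonotactic constraint: the licit
# onset/coda pair depends on the vowel. Illustrative only.

ONSET_CODA_BY_VOWEL = {
    "ae": ("f", "s"),  # with /ae/: /f/ syllable-initial, /s/ syllable-final
    "I":  ("s", "f"),  # with /I/: the reverse
}

def obeys_constraint(onset, vowel, coda):
    """True if the CVC syllable satisfies the vowel-dependent constraint."""
    expected = ONSET_CODA_BY_VOWEL.get(vowel)
    return expected is not None and (onset, coda) == expected

print(obeys_constraint("f", "ae", "s"))  # True: /f/...VowelAe.../s/ is licit
print(obeys_constraint("f", "I", "s"))   # False: with /I/ the pair must reverse
```

    A first-order constraint would need only the consonant's position; the extra conditioning on the vowel is what makes this class of constraints slower to learn.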

  12. The Emotional Impact of Being Myself: Emotions and Foreign-Language Processing

    ERIC Educational Resources Information Center

    Ivaz, Lela; Costa, Albert; Duñabeitia, Jon Andoni

    2016-01-01

    Native languages are acquired in emotionally rich contexts, whereas foreign languages are typically acquired in emotionally neutral academic environments. As a consequence of this difference, it has been suggested that bilinguals' emotional reactivity in foreign-language contexts is reduced as compared with native language contexts. In the current…

  13. Sign Language and Language Acquisition in Man and Ape. New Dimensions in Comparative Pedolinguistics.

    ERIC Educational Resources Information Center

    Peng, Fred C. C., Ed.

    A collection of research materials on sign language and primatology is presented here. The essays attempt to show that: sign language is a legitimate language that can be learned not only by humans but by nonhuman primates as well, and nonhuman primates have the capability to acquire a human language using a different mode. The following…

  14. Phonological reduplication in sign language: Rules rule

    PubMed Central

    Berent, Iris; Dupuis, Amanda; Brentari, Diane

    2014-01-01

    Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX)—a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal. PMID:24959158

  15. A Phenomenological Study: The Impacts of Developing Phonetic Awareness through Technological Resources on English Language Learners' (ELL) Communicative Competences

    ERIC Educational Resources Information Center

    Fabre-Merchán, Paolo; Torres-Jara, Gabriela; Andrade-Dominguez, Francisco; Ortiz-Zurita, Ma. José; Alvarez-Muñoz, Patricio

    2017-01-01

    Throughout our experience within the English Language Teaching (ELT) field and while acquiring a second language in English as a Foreign Language (EFL) and English as a Second Language (ESL) settings, we have noticed that one of the main perceived challenges for English Language Learners (ELLs) is to communicate effectively. Most of the time, this…

  16. Preterm and Term Infants' Perception of Temporally Coordinated Syllable-Object Pairings: Implications for Lexical Development

    ERIC Educational Resources Information Center

    Gogate, Lakshmi; Maganti, Madhavilatha; Perenyi, Agnes

    2014-01-01

    Purpose: This experimental study examined term infants (n = 34) and low-risk near-term preterm infants (gestational age 32-36 weeks) at 2 months chronological age (n = 34) and corrected age (n = 16). The study investigated whether the preterm infants presented with a delay in their sensitivity to synchronous syllable-object pairings when compared…

  17. Inhibition accumulates over time at multiple processing levels in bilingual language control.

    PubMed

    Kleinman, Daniel; Gollan, Tamar H

    2018-04-01

    It is commonly assumed that bilinguals enable production in their nondominant language by inhibiting their dominant language temporarily, fully lifting inhibition to switch back. In a re-analysis of data from 416 Spanish-English bilinguals who repeatedly named a small set of pictures while switching languages in response to cues, we separated trials into different types that revealed three cumulative effects. Bilinguals named each picture (a) faster for every time they had previously named that same picture in the same language, an asymmetric repetition priming effect that was greater in their nondominant language, and (b) more slowly for every time they had previously named that same picture in the other language, an effect that was equivalent across languages and implies symmetric lateral inhibition between translation equivalents. Additionally, (c) bilinguals named pictures in the dominant language more slowly for every time they had previously named unrelated pictures in the nondominant language, exhibiting asymmetric language-wide global inhibition. These mechanisms dynamically alter the balances of activation between languages and between lemmas, providing evidence for an oft-assumed but seldom demonstrated key mechanism of bilingual control (competition between translations), resolving the mystery of why reversed language dominance sometimes emerges (the combined forces of asymmetrical effects emerge over time in mixed-language blocks), and also explaining other longer-lasting effects (block order). Key signatures of bilingual control can depend on seemingly trivial methodological details (e.g., the number of trials in a block) because inhibition is applied cumulatively at both local and global levels, persisting long after each individual act of selection. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Deconstructing the Southeast Asian Sesquisyllable: A Gestural Account

    ERIC Educational Resources Information Center

    Butler, Becky Ann

    2014-01-01

    This dissertation explores a purportedly unusual word type known as the "sesquisyllable," which has long been considered characteristic of mainland Southeast Asian languages. Sesquisyllables are traditionally defined as "one and a half" syllables, or as one "major" syllable preceded by one "minor" syllable,…

  19. Assessment Tools to Differentiate between Language Differences and Disorders in English Language Learners

    ERIC Educational Resources Information Center

    Shenoy, Sunaina

    2014-01-01

    English language learners (ELLs) who are in the process of acquiring English as a second language for academic purposes, are often misidentified as having Language Learning Disabilities (LLDs). Policies regarding the assessment of ELLs have undergone many changes through the years, such as the introduction of a Response to Intervention (RTI)…

  20. Concurrent Relations between Face Scanning and Language: A Cross-Syndrome Infant Study

    PubMed Central

    D’Souza, Dean; D’Souza, Hana; Johnson, Mark H.; Karmiloff-Smith, Annette

    2015-01-01

    Typically developing (TD) infants enhance their learning of spoken language by observing speakers’ mouth movements. Given the fact that word learning is seriously delayed in most children with neurodevelopmental disorders, we hypothesized that this delay partly results from differences in visual face scanning, e.g., focusing attention away from the mouth. To test this hypothesis, we used an eye tracker to measure visual attention in 95 infants and toddlers with Down syndrome (DS), fragile X syndrome (FXS), and Williams syndrome (WS), and compared their data to 25 chronological- and mental-age matched 16-month-old TD controls. We presented participants with two talking faces (one on each side of the screen) and a sound (/ga/). One face (the congruent face) mouthed the syllable that the participants could hear (i.e., /ga/), while the other face (the incongruent face) mouthed a different syllable (/ba/) from the one they could hear. As expected, we found that TD children with a relatively large vocabulary made more fixations to the mouth region of the incongruent face than elsewhere. However, toddlers with FXS or WS who had a relatively large receptive vocabulary made more fixations to the eyes (rather than the mouth) of the incongruent face. In DS, by contrast, fixations to the speaker’s overall face (rather than to her eyes or mouth) predicted vocabulary size. These findings suggest that, at some point in development, different processes or strategies relating to visual attention are involved in language acquisition in DS, FXS, and WS. This knowledge may help further explain why language is delayed in children with neurodevelopmental disorders. It also raises the possibility that syndrome-specific interventions should include an early focus on efficient face-scanning behaviour. PMID:26426329

  2. Spoken verb processing in Spanish: An analysis using a new online resource

    PubMed Central

    Rivera, Semilla M.; Bates, Elizabeth A.; Orozco-Figueroa, Araceli; Wicha, Nicole Y. Y.

    2012-01-01

    Verbs are one of the basic building blocks of grammar, yet few studies have examined the grammatical, morphological, and phonological factors contributing to lexical access and production of Spanish verb inflection. This report describes an online data set that incorporates psycholinguistic dimensions for 50 of the most common early-acquired Spanish verbs. Using this data set, predictors of response time (RT) from stimulus onset and mean differences at offset are examined. Native Spanish speakers, randomly assigned to one of two tasks, listened to prerecorded verbs and either repeated the verb (single word shadowing) or produced its corresponding pronoun. Factors such as stimulus duration, number of syllables, syllable stress position, and specific levels of initial phoneme facilitated both shadowing of a verb and production of its pronoun. Higher frequency verbs facilitated faster verb repetition, whereas verbs with alternative pronouns increased RT to pronoun production. Mean differences at offset (stimulus duration is removed) indicated that listeners begin speaking earlier when the verb is longer and multisyllabic compared to shorter, monosyllabic words. These results highlight the association between psycholinguistic factors and RT measures of verb processing, in particular, features unique to languages like Spanish, such as alternative pronoun and tense. PMID:23002318

  3. Relations among Detection of Syllable Stress, Speech Abnormalities, and Communicative Ability in Adults with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Kargas, Niko; López, Beatriz; Morris, Paul; Reddy, Vasudevi

    2016-01-01

    Purpose: To date, the literature on perception of affective, pragmatic, and grammatical prosody abilities in autism spectrum disorders (ASD) has been sparse and contradictory. It is interesting to note that the primary perception of syllable stress within the word structure, which is crucial for all prosody functions, remains relatively unexplored…

  4. Caffeine enhances real-world language processing: evidence from a proofreading task.

    PubMed

    Brunyé, Tad T; Mahoney, Caroline R; Rapp, David N; Ditman, Tali; Taylor, Holly A

    2012-03-01

    Caffeine has become the most prevalently consumed psychostimulant in the world, but its influences on daily real-world functioning are relatively unknown. The present work investigated the effects of caffeine (0 mg, 100 mg, 200 mg, 400 mg) on a commonplace language task that required readers to identify and correct 4 error types in extended discourse: simple local errors (misspelling 1- to 2-syllable words), complex local errors (misspelling 3- to 5-syllable words), simple global errors (incorrect homophones), and complex global errors (incorrect subject-verb agreement and verb tense). In 2 placebo-controlled, double-blind studies using repeated-measures designs, we found higher detection and repair rates for complex global errors, asymptoting at 200 mg in low consumers (Experiment 1) and peaking at 400 mg in high consumers (Experiment 2). In both cases, covariate analyses demonstrated that arousal state mediated the relationship between caffeine consumption and the detection and repair of complex global errors. Detection and repair rates for the other 3 error types were not affected by caffeine consumption. Taken together, we demonstrate that caffeine has differential effects on error detection and repair as a function of dose and error type, and this relationship is closely tied to caffeine's effects on subjective arousal state. These results support the notion that central nervous system stimulants may enhance global processing of language-based materials and suggest that such effects may originate in caffeine-related right hemisphere brain processes. Implications for understanding the relationships between caffeine consumption and real-world cognitive functioning are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  5. Examining the Role of Time and Language Type in Reading Development for English Language Learners

    ERIC Educational Resources Information Center

    Betts, Joseph; Bolt, Sara; Decker, Dawn; Muyskens, Paul; Marston, Doug

    2009-01-01

    The purpose of this study was to examine the development of English reading achievement among English Language Learners (ELLs) and to determine whether the time that an ELL's family was in the United States and the type of native language spoken affected their reading development. Participants were 300 third-grade ELLs from two different native…

  6. Language acquisition for deaf children: Reducing the harms of zero tolerance to the use of alternative approaches.

    PubMed

    Humphries, Tom; Kushalnagar, Poorna; Mathur, Gaurav; Napoli, Donna Jo; Padden, Carol; Rathmann, Christian; Smith, Scott R

    2012-04-02

    Children acquire language without instruction as long as they are regularly and meaningfully engaged with an accessible human language. Today, 80% of children born deaf in the developed world are implanted with cochlear devices that allow some of them access to sound in their early years, which helps them to develop speech. However, because of brain plasticity changes during early childhood, children who have not acquired a first language in the early years might never be completely fluent in any language. If they miss this critical period for exposure to a natural language, their subsequent development of the cognitive activities that rely on a solid first language, such as literacy, memory organization, and number manipulation, might be impaired. An alternative to speech-exclusive approaches to language acquisition exists in the use of sign languages such as American Sign Language (ASL), where acquiring a sign language is subject to the same time constraints as spoken language development. Unfortunately, so far, these alternatives are caught up in an "either/or" dilemma, leading to a highly polarized conflict about which system families should choose for their children, with little tolerance for alternatives on either side of the debate and widespread misinformation about the evidence and implications for or against either approach. The success rate with cochlear implants is highly variable. This issue is still debated, and as far as we know, there are no reliable predictors of success with implants. Yet families are often advised not to expose their child to sign language. Here, absolute positions based on ideology create pressures for parents that might jeopardize the real developmental needs of deaf children. What we do know is that cochlear implants do not offer accessible language to many deaf children. By the time it is clear that the deaf child is not acquiring spoken language with cochlear devices, it might already be past the critical period, and

  7. The Locus Equation as an Index of Coarticulation in Syllables Produced by Speakers with Profound Hearing Loss

    ERIC Educational Resources Information Center

    McCaffrey Morrison, Helen

    2008-01-01

    Locus equations (LEs) were derived from consonant-vowel-consonant (CVC) syllables produced by four speakers with profound hearing loss. Group data indicated that LE functions obtained for the separate CVC productions initiated by /b/, /d/, and /g/ were less well-separated in acoustic space than those obtained from speakers with normal hearing. A…

  8. Language-Independent and Language-Specific Aspects of Early Literacy: An Evaluation of the Common Underlying Proficiency Model.

    PubMed

    Goodrich, J Marc; Lonigan, Christopher J

    2017-08-01

    According to the common underlying proficiency model (Cummins, 1981), as children acquire academic knowledge and skills in their first language, they also acquire language-independent information about those skills that can be applied when learning a second language. The purpose of this study was to evaluate the relevance of the common underlying proficiency model for the early literacy skills of Spanish-speaking language-minority children using confirmatory factor analysis. Eight hundred fifty-eight Spanish-speaking language-minority preschoolers (mean age = 60.83 months, 50.2% female) participated in this study. Results indicated that bifactor models that consisted of language-independent as well as language-specific early literacy factors provided the best fits to the data for children's phonological awareness and print knowledge skills. Correlated factors models that only included skills specific to Spanish and English provided the best fits to the data for children's oral language skills. Children's language-independent early literacy skills were significantly related across constructs and to language-specific aspects of early literacy. Language-specific aspects of early literacy skills were significantly related within but not across languages. These findings suggest that language-minority preschoolers have a common underlying proficiency for code-related skills but not language-related skills that may allow them to transfer knowledge across languages.

  9. A Study in Content Language Acquisition

    ERIC Educational Resources Information Center

    Broer, Kathleen

    2003-01-01

    This study examines how young second language learners acquire academic language. Among the main language groups represented were Punjabi, Hindi, Tamil, Estonian, Serbian, and Arabic, as well as 23 other language groups. I monitored over 75 students in grades 1, 2, and 4. I was interested in exploring what strategies best promoted coherence in…

  10. Mental time travel and the shaping of language.

    PubMed

    Corballis, Michael C

    2009-01-01

    Episodic memory can be regarded as part of a more general system, unique to humans, for mental time travel, and the construction of future episodes. This allows more detailed planning than is afforded by the more general mechanisms of instinct, learning, and semantic memory. To be useful, episodic memory need not provide a complete or even a faithful record of past events, and may even be part of a process whereby we construct fictional accounts. The properties of language are aptly designed for the communication and sharing of episodes, and for the telling of stories; these properties include symbolic representation of the elements of real-world events, time markers, and combinatorial rules. Language and mental time travel probably co-evolved during the Pleistocene, when brain size increased dramatically.

  11. Language performance and auditory evoked fields in 2- to 5-year-old children.

    PubMed

    Yoshimura, Yuko; Kikuchi, Mitsuru; Shitamichi, Kiyomi; Ueno, Sanae; Remijn, Gerard B; Haruta, Yasuhiro; Oi, Manabu; Munesue, Toshio; Tsubokawa, Tsunehisa; Higashida, Haruhiro; Minabe, Yoshio

    2012-02-01

    Language development progresses at a dramatic rate in preschool children. As rapid temporal processing of speech signals is important in daily colloquial environments, we performed magnetoencephalography (MEG) to investigate the linkage between speech-evoked responses during rapid-rate stimulus presentation (interstimulus interval < 1 s) and language performance in 2- to 5-year-old children (n = 59). Our results indicated that syllables with this short stimulus interval evoked a detectable P50m, but not N100m, in most participants, indicating a marked influence of the longer neuronal refractory period for stimulation. The results of equivalent dipole estimation showed that the intensity of the P50m component in the left hemisphere was positively correlated with language performance (conceptual inference ability). The observed positive correlations were suggested to reflect the maturation of synaptic organisation or axonal maturation and myelination underlying the acquisition of linguistic abilities. The present study is among the first to use MEG to study brain maturation pertaining to language abilities in preschool children. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  12. Automatic detection of Parkinson's disease in running speech spoken in three different languages.

    PubMed

    Orozco-Arroyave, J R; Hönig, F; Arias-Londoño, J D; Vargas-Bonilla, J F; Daqrouq, K; Skodda, S; Rusz, J; Nöth, E

    2016-01-01

    The aim of this study is the analysis of continuous speech signals of people with Parkinson's disease (PD) considering recordings in different languages (Spanish, German, and Czech). A method for the characterization of the speech signals, based on the automatic segmentation of utterances into voiced and unvoiced frames, is addressed here. The energy content of the unvoiced sounds is modeled using 12 Mel-frequency cepstral coefficients and 25 bands scaled according to the Bark scale. Four speech tasks comprising isolated words, rapid repetition of the syllables /pa/-/ta/-/ka/, sentences, and read texts are evaluated. The method proves to be more accurate than classical approaches in the automatic classification of speech of people with PD and healthy controls. The accuracies range from 85% to 99% depending on the language and the speech task. Cross-language experiments are also performed, confirming the robustness and generalization capability of the method, with accuracies ranging from 60% to 99%. This work represents a step forward in the development of computer-aided tools for the automatic assessment of dysarthric speech signals in multiple languages.
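    The Bark-scaled band analysis mentioned in this abstract relies on a standard Hz-to-Bark frequency mapping. A minimal sketch of one common approximation (Zwicker & Terhardt's formula), not necessarily the exact variant the authors used:

    ```python
    import numpy as np

    def hz_to_bark(f_hz):
        """Map frequency in Hz to the Bark critical-band scale.

        Uses the Zwicker & Terhardt approximation; other variants
        (e.g., Traunmueller's) differ slightly in shape.
        """
        f = np.asarray(f_hz, dtype=float)
        return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

    # Edges of 25 equal-width bands on the Bark axis, as in the abstract's setup
    bark_edges = np.linspace(hz_to_bark(0.0), hz_to_bark(8000.0), 26)
    one_khz_in_bark = float(hz_to_bark(1000.0))  # roughly 8.5 Bark
    ```

    Band energies would then be accumulated per Bark band over the unvoiced frames before classification.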

  13. Speed-difficulty trade-off in speech: Chinese versus English

    PubMed Central

    Sun, Yao; Latash, Elizaveta M.; Mikaelian, Irina L.

    2011-01-01

    This study continues the investigation of the previously described speed-difficulty trade-off in picture description tasks. In particular, we tested the hypothesis that Mandarin Chinese and American English are similar in showing logarithmic dependences between speech time and index of difficulty (ID), while they differ significantly in the amount of time needed to describe simple pictures, that this difference increases for more complex pictures, and that it is associated with a proportional difference in the number of syllables used. Subjects (eight Chinese speakers and eight English speakers) were tested in pairs. One subject (the Speaker) described simple pictures, while the other subject (the Performer) tried to reproduce the pictures based on the verbal description as quickly as possible with a set of objects. The Chinese speakers initiated speech production significantly faster than the English speakers. Speech time scaled linearly with ln(ID) in all subjects, but the regression coefficient was significantly higher in the English speakers than in the Chinese speakers. The number of errors was somewhat lower in the Chinese participants (not significantly). The Chinese pairs also showed a shorter delay between the initiation of speech and the initiation of action by the Performer, shorter movement time by the Performer, and shorter overall performance time. The number of syllables scaled with ID, and the Chinese speakers used significantly smaller numbers of syllables. Speech rate was comparable between the two groups, about 3 syllables/s; it dropped for more complex pictures (higher ID). When asked to reproduce the same pictures without speaking, movement time scaled linearly with ln(ID); the Chinese performers were slower than the English performers. We conclude that natural languages show a speed-difficulty trade-off similar to Fitts' law; the trade-offs in movement and speech production are likely to originate at a cognitive level. The time advantage of the
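    The reported relation, speech time T = a + b·ln(ID), is an ordinary least-squares fit on log-transformed difficulty, analogous to Fitts' law. A sketch with purely illustrative numbers (not the study's measurements):

    ```python
    import numpy as np

    # Hypothetical (ID, speech time) pairs; the study reports a linear relation
    # T = a + b*ln(ID), with a larger slope b for English than for Chinese speakers.
    index_of_difficulty = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
    speech_time_s = np.array([1.1, 1.8, 2.5, 3.2, 3.9])  # illustrative values

    # Fit T against ln(ID); b is the regression coefficient compared across groups
    b, a = np.polyfit(np.log(index_of_difficulty), speech_time_s, 1)
    predicted = a + b * np.log(index_of_difficulty)
    ```

    Comparing the fitted slope b between language groups is what distinguishes the English and Chinese speakers in the abstract.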

  14. Automatic processing of tones and speech stimuli in children with specific language impairment.

    PubMed

    Uwer, Ruth; Albrecht, Ronald; von Suchodoletz, W

    2002-08-01

    It is well known from behavioural experiments that children with specific language impairment (SLI) have difficulties discriminating consonant-vowel (CV) syllables such as /ba/, /da/, and /ga/. Mismatch negativity (MMN) is an auditory event-related potential component that represents the outcome of an automatic comparison process. It could, therefore, be a promising tool for assessing central auditory processing deficits for speech and non-speech stimuli in children with SLI. MMN is typically evoked by occasionally occurring 'deviant' stimuli in a sequence of identical 'standard' sounds. In this study, MMN was elicited by simple tone stimuli, which differed in frequency (1000 versus 1200 Hz) and duration (175 versus 100 ms), and by digitized CV syllables, which differed in place of articulation (/ba/, /da/, and /ga/), in children with expressive and receptive SLI and healthy control children (n=21 in each group, 46 males and 17 females; age range 5 to 10 years). Mean MMN amplitudes between groups were compared. Additionally, behavioural discrimination performance was assessed. Children with SLI had attenuated MMN amplitudes to speech stimuli, but there was no significant difference between the two diagnostic subgroups. MMN to tone stimuli did not differ between the groups. Children with SLI made more errors in the discrimination task, but discrimination scores did not correlate with MMN amplitudes. The present data suggest that children with SLI show a specific deficit in automatic discrimination of CV syllables differing in place of articulation, whereas the processing of simple tone differences seems to be unimpaired.

  15. Language and Recursion

    NASA Astrophysics Data System (ADS)

    Lowenthal, Francis

    2010-11-01

    This paper examines whether the recursive structure embedded in some exercises used in the Non Verbal Communication Device (NVCD) approach is actually the factor that enables this approach to favor language acquisition and reacquisition in the case of children with cerebral lesions. To that end, a definition of the principle of recursion as it is used by logicians is presented. The two opposing approaches to the problem of language development are explained. For many authors, such as Chomsky [1], the faculty of language is innate; this is known as the Standard Theory. Other researchers in this field, e.g. Bates and Elman [2], claim that language is entirely constructed by the young child: they thus speak of Language Acquisition. It is also shown that in both cases, a version of the principle of recursion is relevant for human language. The NVCD approach is defined and the results obtained in the domain of language while using this approach are presented: young subjects using this approach acquire a richer language structure, or re-acquire such a structure in the case of cerebral lesions. Finally, it is shown that the exercises used in this framework imply the manipulation of recursive structures leading to regular grammars. It is thus hypothesized that language development could be favored by using recursive structures with the young child. It could also be the case that the NVCD-like exercises used with children lead to the elaboration of a regular language, as defined by Chomsky [3], which could be sufficient for language development but would not require full recursion. This double claim could reconcile Chomsky's approach with the psychological observations made by adherents of the Language Acquisition approach, if it is confirmed by research combining the use of NVCDs, psychometric methods, and neural networks. This paper thus suggests that a research group oriented towards this problem should be organized.

  16. The phonological abilities of Cantonese-speaking children with hearing loss.

    PubMed

    Dodd, B J; So, L K

    1994-06-01

    Little is known about the acquisition of phonology by children with hearing loss who learn languages other than English. In this study, the phonological abilities of 12 Cantonese-speaking children (ages 4:2 to 6:11) with prelingual hearing impairment are described. All but 3 children had almost complete syllable-initial consonant repertoires; all but 2 had complete syllable-final consonant and vowel repertoires; and only 1 child failed to produce all nine tones. Children's perception of single words was assessed using sets of words that included tone, consonant, and semantic distractors. Although the performance of the subjects was not age appropriate, they nevertheless most often chose the target, with most errors observed for the tone distractor. The phonological rules used included those that characterize the speech of younger hearing children acquiring Cantonese (e.g., cluster reduction, stopping, and deaspiration). However, most children also used at least one unusual phonological rule (e.g., frication, addition, initial consonant deletion, and/or backing). These rules are common in the speech of Cantonese-speaking children diagnosed as phonologically disordered. The influence of the ambient language on children's patterns of phonological errors is discussed.

  17. Serious Use of a Serious Game for Language Learning

    ERIC Educational Resources Information Center

    Johnson, W. Lewis

    2010-01-01

    The Tactical Language and Culture Training System (TLCTS) helps learners acquire basic communicative skills in foreign languages and cultures. Learners acquire communication skills through a combination of interactive lessons and serious games. Artificial intelligence plays multiple roles in this learning environment: to process the learner's…

  18. Spanish Language Processing at University of Maryland: Building Infrastructure for Multilingual Applications

    DTIC Science & Technology

    2001-09-01

    translation of the Spanish original sentence. Acquiring bilingual dictionary entries In addition to building and applying the more sophisticated LCS...porting LCS lexicons to new languages, as described above, and are also useful by themselves in improving dictionary-based cross-language information...hold much of the time. Moreover, lexical dependencies have proven to be instrumental in advances in monolingual syntactic analysis (e.g. I-erg MY

  19. Acquiring the Language of Learning: The Performance of Hawaiian Preschool Children on the Preschool Language Assessment Instrument (PLAI).

    ERIC Educational Resources Information Center

    Martini, Mary

    The Preschool Language Assessment Instrument (PLAI) was designed as a diagnostic tool for 3- to 6-year-old children to assess children's abilities to use language to solve thinking problems typically posed by teachers. The PLAI was developed after observing middle-class teachers in preschool classrooms encourage children to use language in…

  20. Research in Second Language Acquisition: Selected Papers of the Los Angeles Second Language Acquisition Research Forum. Issues In Second Language Research.

    ERIC Educational Resources Information Center

    Scarcella, Robin C., Ed.; Krashen, Stephen D., Ed.

    The following papers are included: (1) "The Theoretical and Practical Relevance of Simple Codes in Second Language Acquisition" (Krashen); (2) "Talking to Foreigners versus Talking to Children: Similarities and Differences" (Freed); (3) "The Levertov Machine" (Stevick); (4) "Acquiring a Second Language when You're Not the Underdog" (Edelsky and…

  1. Rhyming Words and Onset-Rime Constituents: An Inquiry into Structural Breaking Points and Emergent Boundaries in the Syllable

    ERIC Educational Resources Information Center

    Geudens, Astrid; Sandra, Dominiek; Martensen, Heike

    2005-01-01

    Geudens and Sandra, in their 2003 study, investigated the special role of onsets and rimes in Dutch-speaking children's explicit phonological awareness. In the current study, we tapped implicit phonological knowledge using forced-choice similarity judgment (Experiment 1) and recall of syllable lists (Experiment 2). In Experiment 1, Dutch-speaking…

  2. Growth of language-related brain areas after foreign language learning.

    PubMed

    Mårtensson, Johan; Eriksson, Johan; Bodammer, Nils Christian; Lindgren, Magnus; Johansson, Mikael; Nyberg, Lars; Lövdén, Martin

    2012-10-15

    The influence of adult foreign-language acquisition on human brain organization is poorly understood. We studied cortical thickness and hippocampal volumes of conscript interpreters before and after three months of intense language studies. Results revealed increases in hippocampus volume and in cortical thickness of the left middle frontal gyrus, inferior frontal gyrus, and superior temporal gyrus for interpreters relative to controls. The right hippocampus and the left superior temporal gyrus were structurally more malleable in interpreters acquiring higher proficiency in the foreign language. Interpreters struggling relatively more to master the language displayed larger gray matter increases in the middle frontal gyrus. These findings confirm structural changes in brain regions known to serve language functions during foreign-language acquisition. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Learning multiple rules simultaneously: Affixes are more salient than reduplications.

    PubMed

    Gervain, Judit; Endress, Ansgar D

    2017-04-01

    Language learners encounter numerous opportunities to learn regularities, but need to decide which of these regularities to learn, because some are not productive in their native language. Here, we present an account of rule learning based on perceptual and memory primitives (Endress, Dehaene-Lambertz, & Mehler, Cognition, 105(3), 577-614, 2007; Endress, Nespor, & Mehler, Trends in Cognitive Sciences, 13(8), 348-353, 2009), suggesting that learners preferentially learn regularities that are more salient to them, and that the pattern of salience reflects the frequency of language features across languages. We contrast this view with previous artificial grammar learning research, which suggests that infants "choose" the regularities they learn based on rational, Bayesian criteria (Frank & Tenenbaum, Cognition, 120(3), 360-371, 2013; Gerken, Cognition, 98(3), B67-B74, 2006, Cognition, 115(2), 362-366, 2010). In our experiments, adult participants listened to syllable strings starting with a syllable reduplication and always ending with the same "affix" syllable, or to syllable strings starting with this "affix" syllable and ending with the "reduplication". Both affixation and reduplication are frequently used for morphological marking across languages. We find three crucial results. First, participants learned both regularities simultaneously. Second, affixation regularities seemed easier to learn than reduplication regularities. Third, regularities in sequence offsets were easier to learn than regularities at sequence onsets. We show that these results are inconsistent with previous Bayesian rule learning models, but mesh well with the perceptual or memory primitives view. Further, we show that the pattern of salience revealed in our experiments reflects the distribution of regularities across languages. Ease of acquisition might thus be one determinant of the frequency of regularities across languages.

  4. The Frame Constraint on Experimentally Elicited Speech Errors in Japanese.

    PubMed

    Saito, Akie; Inoue, Tomoyoshi

    2017-06-01

    The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which operates separately from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, the replaced and replacing sounds tend to occupy the same position within a syllable or word. Most of the evidence for the effect comes from analyses of naturally occurring speech errors in Indo-European languages, and there are few studies examining the effect in experimentally elicited speech errors or in other languages. This study examined whether experimentally elicited sound errors in Japanese exhibit the syllable position effect. In Japanese, the sub-syllabic unit known as the "mora" is considered to be a basic sound unit in production. Results showed that the syllable position effect occurred in mora errors, suggesting that the frame constrains the ordering of sounds during speech production.

  5. Foreign Languages and Your Career.

    ERIC Educational Resources Information Center

    Bourgoin, Edward

    Divided into two major parts, this book is intended to indicate careers in which people need foreign languages in their work and to provide suggestions and sources of further information for those who already have foreign language skills and those who are planning to acquire them. Part 1 discusses careers in which a foreign language is needed as a…

  6. A Compiler and Run-time System for Network Programming Languages

    DTIC Science & Technology

    2012-01-01

    A Compiler and Run-time System for Network Programming Languages Christopher Monsanto Princeton University Nate Foster Cornell University Rob...Foster, R. Harrison, M. Freedman, C. Monsanto, J. Rexford, A. Story, and D. Walker. Frenetic: A network programming language. In ICFP, Sep 2011. [10] A

  7. Language time series analysis

    NASA Astrophysics Data System (ADS)

    Kosmidis, Kosmas; Kalampokis, Alkiviadis; Argyrakis, Panos

    2006-10-01

    We use the detrended fluctuation analysis (DFA) and Grassberger-Procaccia (GP) methods in order to study language characteristics. Even though we construct our signals using only word lengths or word frequencies, thereby excluding a huge amount of information carried by language, the application of GP analysis indicates that linguistic signals may be considered the manifestation of a complex system of high dimensionality, different from random signals or from systems of low dimensionality such as the Earth's climate. The DFA method is additionally able to distinguish a natural-language signal from a computer-code signal. This last result may be useful in the field of cryptography.
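    The DFA procedure this abstract relies on is well defined and easy to sketch. The implementation below is a generic first-order DFA, not the authors' code; it is demonstrated on synthetic uncorrelated noise (where the scaling exponent should be near 0.5) rather than on a word-length series, which would be built the same way from `[len(w) for w in text.split()]`.

    ```python
    import numpy as np

    def dfa_exponent(signal, scales):
        """Estimate the DFA scaling exponent of a 1-D signal.

        For each window size s, the cumulative sum (profile) of the
        mean-centered signal is split into non-overlapping windows, a
        linear trend is removed from each window, and the RMS residual
        F(s) is computed. The slope of log F(s) versus log s is the
        exponent: ~0.5 for uncorrelated noise, >0.5 for persistent
        long-range correlations.
        """
        x = np.asarray(signal, dtype=float)
        profile = np.cumsum(x - x.mean())
        fluctuations = []
        for s in scales:
            n_windows = len(profile) // s
            rms = []
            for i in range(n_windows):
                window = profile[i * s:(i + 1) * s]
                t = np.arange(s)
                coeffs = np.polyfit(t, window, 1)          # local linear trend
                detrended = window - np.polyval(coeffs, t)
                rms.append(np.sqrt(np.mean(detrended ** 2)))
            fluctuations.append(np.mean(rms))
        slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
        return slope

    rng = np.random.default_rng(0)
    white_noise = rng.standard_normal(4096)
    alpha = dfa_exponent(white_noise, scales=[16, 32, 64, 128, 256])
    ```

    Applied to word-length or word-frequency series, an exponent deviating from 0.5 signals the long-range structure the abstract describes.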

  8. The effect of native-language experience on the sensory-obligatory components, the P1–N1–P2 and the T-complex

    PubMed Central

    Wagner, Monica; Shafer, Valerie L.; Martin, Brett; Steinschneider, Mitchell

    2013-01-01

    The influence of native-language experience on sensory-obligatory auditory-evoked potentials (AEPs) was investigated in native-English and native-Polish listeners. AEPs were recorded to the first word in nonsense word pairs, while participants performed a syllable identification task to the second word in the pairs. Nonsense words contained phoneme sequence onsets (i.e., /pt/, /pət/, /st/ and /sət/) that occur in the Polish and English languages, with the exception that /pt/ at syllable onset is an illegal phonotactic form in English. P1–N1–P2 waveforms from fronto-central electrode sites were comparable in English and Polish listeners, even though, these same English participants were unable to distinguish the nonsense words having /pt/ and /pət/ onsets. The P1–N1–P2 complex indexed the temporal characteristics of the word stimuli in the same manner for both language groups. Taken together, these findings suggest that the fronto-central P1–N1–P2 complex reflects acoustic feature processing of speech and is not significantly influenced by exposure to the phoneme sequences of the native-language. In contrast, the T-complex from bilateral posterior temporal sites was found to index phonological as well as acoustic feature processing to the nonsense word stimuli. An enhanced negativity for the /pt/ cluster relative to its contrast sequence (i.e., /pət/) occurred only for the Polish listeners, suggesting that neural networks within non-primary auditory cortex may be involved in early cortical phonological processing. PMID:23643857

  9. Spatial Language Facilitates Spatial Cognition: Evidence from Children Who Lack Language Input

    ERIC Educational Resources Information Center

    Gentner, Dedre; Ozyurek, Asli; Gurcanli, Ozge; Goldin-Meadow, Susan

    2013-01-01

    Does spatial language influence how people think about space? To address this question, we observed children who did not know a conventional language, and tested their performance on nonlinguistic spatial tasks. We studied deaf children living in Istanbul whose hearing losses prevented them from acquiring speech and whose hearing parents had not…

  10. Stuttering in English-Mandarin bilingual speakers: the influence of language dominance on stuttering severity.

    PubMed

    Lim, Valerie P C; Lincoln, Michelle; Chan, Yiong Huak; Onslow, Mark

    2008-12-01

    English and Mandarin are the 2 most spoken languages in the world, yet it is not known how stuttering manifests in English-Mandarin bilinguals. In this research, the authors investigated whether the severity and type of stuttering differ between English and Mandarin in English-Mandarin bilinguals, and whether any difference is influenced by language dominance. Thirty English-Mandarin bilinguals who stutter (BWS), ages 12-44 years, were categorized into 3 groups (15 English-dominant, 4 Mandarin-dominant, and 11 balanced bilinguals) using a self-report classification tool. Three 10-min conversations in English and Mandarin were assessed by 2 English-Mandarin bilingual clinicians for percent syllables stuttered (%SS), perceived stuttering severity (SEV), and types of stuttering behaviors using the Lidcombe Behavioral Data Language (LBDL; Packman & Onslow, 1998; Teesson, Packman, & Onslow, 2003). English-dominant and Mandarin-dominant BWS exhibited higher %SS and SEV scores in their less dominant language, whereas the scores for the balanced bilinguals were similar for both languages. The difference in the percentage of stutters per LBDL category between English and Mandarin was not markedly different for any bilingual group. Language dominance appeared to influence the severity but not the types of stuttering behaviors in BWS. Clinicians working with BWS need to assess language dominance when diagnosing stuttering severity in bilingual clients.
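    The %SS measure used throughout these stuttering studies is a simple ratio of stuttered to spoken syllables. A minimal sketch (function name hypothetical, not from any of the cited papers):

    ```python
    def percent_syllables_stuttered(stuttered_syllables, total_syllables):
        """%SS: stuttered syllables as a percentage of all syllables spoken."""
        if total_syllables <= 0:
            raise ValueError("total_syllables must be positive")
        return 100.0 * stuttered_syllables / total_syllables

    # e.g. 18 stuttered syllables in a 600-syllable conversation sample
    severity = percent_syllables_stuttered(18, 600)  # 3.0 %SS
    ```

    Because %SS normalizes by sample length, scores from conversations of different durations and languages can be compared directly.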

  11. Listeners feel the beat: entrainment to English and French speech rhythms.

    PubMed

    Lidji, Pascale; Palmer, Caroline; Peretz, Isabelle; Morningstar, Michele

    2011-12-01

    Can listeners entrain to speech rhythms? Monolingual speakers of English and French and balanced English-French bilinguals tapped along with the beat they perceived in sentences spoken in a stress-timed language, English, and a syllable-timed language, French. All groups of participants tapped more regularly to English than to French utterances. Tapping performance was also influenced by the participants' native language: English-speaking participants and bilinguals tapped more regularly and at higher metrical levels than did French-speaking participants, suggesting that long-term linguistic experience with a stress-timed language can differentiate speakers' entrainment to speech rhythm.

  12. Segmental transition of the first syllables of words in Japanese children who stutter: Comparison between word and sentence production.

    PubMed

    Matsumoto, Sachiyo; Ito, Tomohiko

    2016-01-01

    Matsumoto-Shimamori, Ito, Fukuda, & Fukuda (2011) proposed the hypothesis that in Japanese, the transition from the core vowels (i.e., syllable nuclei) of the first syllables of words to the following segments affects the occurrence of stuttering. Moreover, in this transition position, an inter-syllabic transition precipitated more stuttering than an intra-syllabic one (Shimamori & Ito, 2007, 2008). However, these studies used only word production tasks. The purpose of this study was to investigate whether the same results could be obtained in sentence production tasks. Participants were 28 Japanese school-age children who stutter, ranging in age from 7;3 to 12;7. The frequency of stuttering on words with an inter-syllabic transition was significantly higher than on those with an intra-syllabic transition, not only in isolated words but also in the first words of sentences. These results suggested that Matsumoto et al.'s hypothesis could also apply to sentence production tasks.

  13. Chinese EFL teachers' knowledge of basic language constructs and their self-perceived teaching abilities.

    PubMed

    Zhao, Jing; Joshi, R Malatesha; Dixon, L Quentin; Huang, Liyan

    2016-04-01

    The present study examined the knowledge and skills of basic language constructs among elementary school teachers who were teaching English as a Foreign Language (EFL) in China. Six hundred and thirty in-service teachers completed the adapted Reading Teacher Knowledge Survey. Survey results showed that English teachers' self-perceived ability to teach vocabulary was the highest and self-perceived ability to teach reading to struggling readers was the lowest. Morphological knowledge was positively correlated with teachers' self-perceived teaching abilities, and it contributed unique variance even after controlling for the effects of ultimate educational attainment and years of teaching. Findings suggest that elementary school EFL teachers in China, on average, were able to display implicit skills related to certain basic language constructs, but less able to demonstrate explicit knowledge of other skills, especially sub-lexical units (e.g., phonemic awareness and morphemes). The high self-perceived ability of teaching vocabulary and high scores on syllable counting reflected the focus on larger units in the English reading curriculum.

  14. Space-Time, Phenomenology, and the Picture Theory of Language

    NASA Astrophysics Data System (ADS)

    Grelland, Hans Herlof

    To assess Minkowski's introduction of space-time in relativity, the case is made for the view that abstract language and mathematics carry meaning not only through their connections with observation but as pictures of facts. This view is contrasted with the more traditional intuitionism of Hume, Mach, and Husserl. Einstein's attempt at a conceptual reconstruction of space and time, as well as Husserl's analysis of the loss of meaning in science through increasing abstraction, is analysed. Wittgenstein's picture theory of language is used to explain how meaning is conveyed by abstract expressions, with the Minkowski space as a case.

  15. SyllabO+: A new tool to study sublexical phenomena in spoken Quebec French.

    PubMed

    Bédard, Pascale; Audet, Anne-Marie; Drouin, Patrick; Roy, Johanna-Pascale; Rivard, Julie; Tremblay, Pascale

    2017-10-01

    Sublexical phonotactic regularities in language have a major impact on language development, as well as on speech processing and production throughout the entire lifespan. To understand the impact of phonotactic regularities on speech and language functions at the behavioral and neural levels, it is essential to have access to oral language corpora to study these complex phenomena in different languages. Yet, probably because of their complexity, oral language corpora remain less common than written language corpora. This article presents the first corpus and database of spoken Quebec French syllables and phones: SyllabO+. This corpus contains phonetic transcriptions of over 300,000 syllables (over 690,000 phones) extracted from recordings of 184 healthy adult native Quebec French speakers, ranging in age from 20 to 97 years. To ensure the representativeness of the corpus, these recordings were made in both formal and familiar communication contexts. Phonotactic distributional statistics (e.g., syllable and co-occurrence frequencies, percentages, percentile ranks, transition probabilities, and pointwise mutual information) were computed from the corpus. An open-access online application to search the database was developed, and is available at www.speechneurolab.ca/syllabo. In this article, we present a brief overview of the corpus, as well as the syllable and phone databases, and we discuss their practical applications in various fields of research, including cognitive neuroscience, psycholinguistics, neurolinguistics, experimental psychology, phonetics, and phonology. Nonacademic practical applications are also discussed, including uses in speech-language pathology.
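    The distributional statistics listed in this record (syllable frequencies, transition probabilities, pointwise mutual information) are standard corpus measures. As an illustration only, the sketch below computes syllable bigram transition probability and PMI over an invented toy list of syllabified items; the data and names are hypothetical and are not drawn from SyllabO+ or its application.

    ```python
    from collections import Counter
    from math import log2

    # Hypothetical toy corpus of syllabified words (illustrative only).
    words = [["bon", "jour"], ["bon", "té"], ["a", "jour"], ["bon", "jour"]]

    # Unigram (syllable) and bigram (syllable-pair) counts.
    syll_counts = Counter(s for w in words for s in w)
    bigram_counts = Counter((w[i], w[i + 1]) for w in words for i in range(len(w) - 1))

    n_syll = sum(syll_counts.values())
    n_bi = sum(bigram_counts.values())

    def transition_prob(a, b):
        """P(b | a): proportion of occurrences of syllable a followed by b."""
        follows_a = sum(c for (x, _), c in bigram_counts.items() if x == a)
        return bigram_counts[(a, b)] / follows_a if follows_a else 0.0

    def pmi(a, b):
        """Pointwise mutual information of the bigram (a, b), in bits."""
        p_ab = bigram_counts[(a, b)] / n_bi
        p_a = syll_counts[a] / n_syll
        p_b = syll_counts[b] / n_syll
        return log2(p_ab / (p_a * p_b)) if p_ab else float("-inf")
    ```

    On this toy data, "bon" is followed by "jour" in 2 of its 3 occurrences, so the transition probability is 2/3, and the bigram's PMI is positive because it occurs more often than its parts' frequencies alone would predict.
    
    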

  16. Language acquisition and use: learning and applying probabilistic constraints.

    PubMed

    Seidenberg, M S

    1997-03-14

    What kinds of knowledge underlie the use of language and how is this knowledge acquired? Linguists equate knowing a language with knowing a grammar. Classic "poverty of the stimulus" arguments suggest that grammar identification is an intractable inductive problem and that acquisition is possible only because children possess innate knowledge of grammatical structure. An alternative view is emerging from studies of statistical and probabilistic aspects of language, connectionist models, and the learning capacities of infants. This approach emphasizes continuity between how language is acquired and how it is used. It retains the idea that innate capacities constrain language learning, but calls into question whether they include knowledge of grammatical structure.

  17. "The Craft so Long to Lerne": Aspects of Time in Language Learning

    ERIC Educational Resources Information Center

    Oxford, Rebecca L.

    2017-01-01

    Time factors complexly, dynamically interact with each other and with other contextualized variables in language learning. The time-tied nature of language learning is captured in what I call the "time-prism," which is the central symbol of temporality in this paper. The facets of the prism discussed in this article are (1) language…

  18. Does language shape thought? Mandarin and English speakers' conceptions of time.

    PubMed

    Boroditsky, L

    2001-08-01

    Does the language you speak affect how you think about the world? This question is taken up in three experiments. English and Mandarin talk about time differently--English predominantly talks about time as if it were horizontal, while Mandarin also commonly describes time as vertical. This difference between the two languages is reflected in the way their speakers think about time. In one study, Mandarin speakers tended to think about time vertically even when they were thinking for English (Mandarin speakers were faster to confirm that March comes earlier than April if they had just seen a vertical array of objects than if they had just seen a horizontal array, and the reverse was true for English speakers). Another study showed that the extent to which Mandarin-English bilinguals think about time vertically is related to how old they were when they first began to learn English. In another experiment native English speakers were taught to talk about time using vertical spatial terms in a way similar to Mandarin. On a subsequent test, this group of English speakers showed the same bias to think about time vertically as was observed with Mandarin speakers. It is concluded that (1) language is a powerful tool in shaping thought about abstract domains and (2) one's native language plays an important role in shaping habitual thought (e.g., how one tends to think about time) but does not entirely determine one's thinking in the strong Whorfian sense. Copyright 2001 Academic Press.

  19. Intraoperative Mapping of Expressive Language Cortex Using Passive Real-Time Electrocorticography

    DTIC Science & Technology

    2016-08-26

    Case Report: Intraoperative mapping of expressive language cortex using passive real-time electrocorticography. AmiLyn M… In this case report, we investigated the utility and practicality of passive intraoperative functional mapping of expressive language cortex using high… expressive language regions. In preparation for tumor resection, the patient underwent multiple functional language mapping procedures. We examined…

  20. Evidence for a Preserved Sensitivity to Orthographic Redundancy and an Impaired Access to Phonological Syllables in French Developmental Dyslexics

    ERIC Educational Resources Information Center

    Doignon-Camus, Nadège; Seigneuric, Alix; Perrier, Emeline; Sisti, Aurélie; Zagar, Daniel

    2013-01-01

    To evaluate the orthographic and phonological processing skills of developmental dyslexics, we (a) examined their abilities to exploit properties of orthographic redundancy and (b) tested whether their phonological deficit extends to spelling-to-sound connections for large-grain size units such as syllables. To assess the processing skills in…

  1. Cracking the Language Code: Neural Mechanisms Underlying Speech Parsing

    PubMed Central

    McNealy, Kristin; Mazziotta, John C.; Dapretto, Mirella

    2013-01-01

    Word segmentation, detecting word boundaries in continuous speech, is a critical aspect of language learning. Previous research in infants and adults demonstrated that a stream of speech can be readily segmented based solely on the statistical and speech cues afforded by the input. Using functional magnetic resonance imaging (fMRI), the neural substrate of word segmentation was examined on-line as participants listened to three streams of concatenated syllables, containing either statistical regularities alone, statistical regularities and speech cues, or no cues. Despite the participants’ inability to explicitly detect differences between the speech streams, neural activity differed significantly across conditions, with left-lateralized signal increases in temporal cortices observed only when participants listened to streams containing statistical regularities, particularly the stream containing speech cues. In a second fMRI study, designed to verify that word segmentation had implicitly taken place, participants listened to trisyllabic combinations that occurred with different frequencies in the streams of speech they just heard (“words,” 45 times; “partwords,” 15 times; “nonwords,” once). Reliably greater activity in left inferior and middle frontal gyri was observed when comparing words with partwords and, to a lesser extent, when comparing partwords with nonwords. Activity in these regions, taken to index the implicit detection of word boundaries, was positively correlated with participants’ rapid auditory processing skills. These findings provide a neural signature of on-line word segmentation in the mature brain and an initial model with which to study developmental changes in the neural architecture involved in processing speech cues during language learning. PMID:16855090

  2. Cognitive Rehabilitation for Children with Acquired Brain Injury

    ERIC Educational Resources Information Center

    Slomine, Beth; Locascio, Gianna

    2009-01-01

    Cognitive deficits are frequent consequences of acquired brain injury (ABI) and often require intervention. We review the theoretical and empirical literature on cognitive rehabilitation in a variety of treatment domains including attention, memory, unilateral neglect, speech and language, executive functioning, and family involvement/education.…

  3. Intraoperative language localization in multilingual patients with gliomas.

    PubMed

    Bello, Lorenzo; Acerbi, Francesco; Giussani, Carlo; Baratta, Pietro; Taccone, Paolo; Songa, Valeria; Fava, Marica; Stocchetti, Nino; Papagno, Costanza; Gaini, Sergio M

    2006-07-01

    Intraoperative localization of speech is problematic in patients who are fluent in different languages. Previous studies have generated various results depending on the series of patients studied, the type of language, and the sensitivity of the tasks applied. It is not clear whether languages are mediated by multiple and separate cortical areas or shared by common areas. Globally considered, previous studies recommended performing a multiple intraoperative mapping for all the languages in which the patient is fluent. The aim of this work was to study the feasibility of performing an intraoperative multiple language mapping in a group of multilingual patients with a glioma undergoing awake craniotomy for tumor removal, and to describe the intraoperative cortical and subcortical findings in the area of craniotomy, with the final goal of maximally preserving patients' functional language. Seven late, highly proficient multilingual patients with a left frontal glioma were submitted preoperatively to a battery of tests to evaluate oral language production, comprehension, and repetition. Each language was tested serially, starting from the first acquired language. Items that were correctly named during these tests were used to build personalized blocks to be used intraoperatively. Language mapping was undertaken during awake craniotomies using an Ojemann cortical stimulator during counting and oral naming tasks. Subcortical stimulation, using the same current threshold and the same tests, was applied during tumor resection in a back-and-forth fashion. Cortical sites essential for oral naming were found in 87.5% of patients: those for the first acquired language in one to four sites, those for the other languages in one to three sites. Sites for each language were distinct and separate. The number and location of sites were not predictable, being randomly and widely distributed in the cortex around or, less frequently, over the tumor area. Subcortical stimulations found…

  4. Acquired dyslexia in a Turkish-English speaker.

    PubMed

    Raman, Ilhan; Weekes, Brendan S

    2005-06-01

    The Turkish script is characterised by completely transparent bidirectional mappings between orthography and phonology. To date, there has been no reported evidence of acquired dyslexia in Turkish speakers leading to the naïve view that reading and writing problems in Turkish are probably rare. We examined the extent to which phonological impairment and orthographic transparency influence reading disorders in a native Turkish speaker. BRB is a bilingual Turkish-English speaker with deep dysphasia accompanied by acquired dyslexia in both languages. The main findings are an effect of imageability on reading in Turkish coincident with surface dyslexia in English and preserved nonword reading. BRB's acquired dyslexia suggests that damage to phonological representations might have a consequence for learning to read in Turkish. We argue that BRB's acquired dyslexia has a common locus in chronic underactivation of phonological representations in Turkish and English. Despite a common locus, reading problems manifest themselves differently according to properties of the script and the type of task.

  5. Early Bimodal Stimulation Benefits Language Acquisition for Children With Cochlear Implants.

    PubMed

    Moberly, Aaron C; Lowenstein, Joanna H; Nittrouer, Susan

    2016-01-01

    Adding a low-frequency acoustic signal to the cochlear implant (CI) signal (i.e., bimodal stimulation) for a period of time early in life improves language acquisition. Children must acquire sensitivity to the phonemic units of language to develop most language-related skills, including expressive vocabulary, working memory, and reading. Acquiring sensitivity to phonemic structure depends largely on having refined spectral (frequency) representations available in the signal, which does not happen with CIs alone. Combining the low-frequency acoustic signal available through hearing aids with the CI signal can enhance signal quality. A period with this bimodal stimulation has been shown to improve language skills in very young children. This study examined whether these benefits persist into childhood. Data were examined for 48 children with CIs implanted under age 3 years, participating in a longitudinal study. All children wore hearing aids before receiving a CI, but upon receiving a first CI, 24 children had at least 1 year of bimodal stimulation (Bimodal group), and 24 children had only electric stimulation subsequent to implantation (CI-only group). Measures of phonemic awareness were obtained at second and fourth grades, along with measures of expressive vocabulary, working memory, and reading. Children in the Bimodal group generally performed better on measures of phonemic awareness, and that advantage was reflected in other language measures. Having even a brief period of time early in life with combined electric-acoustic input provides benefits to language learning into childhood, likely because of the enhancement in spectral representations provided.

  6. Longitudinal Evaluation of Language Impairment in Youth With Perinatally Acquired Human Immunodeficiency Virus (HIV) and Youth With Perinatal HIV Exposure

    PubMed Central

    Redmond, Sean M.; Yao, Tzy-Jyun; Russell, Jonathan S.; Rice, Mabel L.; Hoffman, Howard J.; Siberry, George K.; Frederick, Toni; Purswani, Murli; Williams, Paige L.

    2016-01-01

    Background Language impairment (LI) risk is increased for perinatally acquired human immunodeficiency virus-infected (PHIV) and perinatally exposed to HIV but uninfected (PHEU) youth. This study evaluates the persistence of LI in these groups. Methods The Clinical Evaluation of Language Fundamentals was repeated on participants of the Pediatric HIV/AIDS Cohort Study Adolescent Master Protocol 18 months postbaseline. Regression models identified factors associated with change in standardized score (SC) and the resolution or development of LI. Results Of 319 participants, 112 had LI at baseline. Upon re-evaluation, SCs were highly stable and changes were similar in PHIV (n = 212) and PHEU (n = 107) participants. Those with family history of language delays had a 2.39 point lower mean increase in SCs than those without, after controlling for demographic and socioeconomic factors and baseline LI status. Among PHIV participants, CD4 count <350 cells/mm3 was associated with lower mean SC change (4.32 points), and exposure to combination antiretroviral therapy (cART) or protease inhibitors (PIs) was associated with a higher mean SC change (5.93 and 4.19 points, respectively). Initial LI was persistent in most cases (78%); 20 new cases occurred (10%). Female sex was associated with higher odds of LI resolution. Among PHIV, duration and baseline cART and history of PI use were associated with LI resolution; higher percentage of detectable viral loads before baseline was associated with lower odds of resolution. Conclusions The PHIV and PHEU youth are at risk for persistent LI, and family history of language delays was a risk factor for persistence of problems. Measures of successful HIV treatment predicted more favorable outcomes among PHIV youth. PMID:27856674

  7. Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition

    PubMed Central

    Rigoulot, Simon; Wassiliwizky, Eugen; Pell, Marc D.

    2013-01-01

    Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates in speech (e.g., fear is recognized most quickly whereas happiness and disgust are recognized relatively slowly; Pell and Kotz, 2011). To investigate whether vocal emotion recognition is largely dictated by the amount of time listeners are exposed to speech or the position of critical emotional cues in the utterance, 40 English participants judged the meaning of emotionally-inflected pseudo-utterances presented in a gating paradigm, where utterances were gated as a function of their syllable structure in segments of increasing duration from the end of the utterance (i.e., gated syllable-by-syllable from the offset rather than the onset of the stimulus). Accuracy for detecting six target emotions in each gate condition and the mean identification point for each emotion in milliseconds were analyzed and compared to results from Pell and Kotz (2011). We again found significant emotion-specific differences in the time needed to accurately recognize emotions from speech prosody, and new evidence that utterance-final syllables tended to facilitate listeners' accuracy in many conditions when compared to utterance-initial syllables. The time needed to recognize fear, anger, sadness, and neutral from speech cues was not influenced by how utterances were gated, although happiness and disgust were recognized significantly faster when listeners heard the end of utterances first. Our data provide new clues about the relative time course for recognizing vocally-expressed emotions within the 400–1200 ms time window, while highlighting that emotion recognition from prosody can be shaped by the temporal properties of speech. PMID:23805115

  8. Children, Language, and Literacy: Diverse Learners in Diverse Times. Language & Literacy Series

    ERIC Educational Resources Information Center

    Genishi, Celia; Dyson, Anne Haas

    2009-01-01

    In their new collaboration, Celia Genishi and Anne Haas Dyson celebrate the genius of young children as they learn language and literacy in our diverse times. Despite burgeoning sociocultural diversity, many early childhood classrooms (pre-K to grade 2) offer a one-size-fits-all curriculum in which learning is too often assessed by standardized…

  9. On Teaching Strategies in Second Language Acquisition

    ERIC Educational Resources Information Center

    Yang, Hong

    2008-01-01

    How to acquire a second language is a question of obvious importance to teachers and language learners, and how to teach a second language has also become a matter of concern to the linguists' interest in the nature of primary linguistic data. Starting with the development stages of second language acquisition and Stephen Krashen's theory, this…

  10. Implicit memory in music and language.

    PubMed

    Ettlinger, Marc; Margulis, Elizabeth H; Wong, Patrick C M

    2011-01-01

    Research on music and language in recent decades has focused on their overlapping neurophysiological, perceptual, and cognitive underpinnings, ranging from the mechanism for encoding basic auditory cues to the mechanism for detecting violations in phrase structure. These overlaps have most often been identified in musicians with musical knowledge that was acquired explicitly, through formal training. In this paper, we review independent bodies of work in music and language that suggest an important role for implicitly acquired knowledge, implicit memory, and their associated neural structures in the acquisition of linguistic or musical grammar. These findings motivate potential new work that examines music and language comparatively in the context of the implicit memory system.

  12. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    PubMed

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  13. A Report on the Development of Negation in English by a Second Language Learner--Some Implications.

    ERIC Educational Resources Information Center

    Milon, Jack

    The question asked in this paper is whether children below the age of puberty who acquire a second language within the cultural context of that language acquire it in anything resembling the same developmental order that native speakers of the language acquire it. A seven-year-old Japanese boy's development in English negation structure provides…

  14. A Concept for Run-Time Support of the Chapel Language

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A document presents a concept for run-time implementation of other concepts embodied in the Chapel programming language. (Now undergoing development, Chapel is intended to become a standard language for parallel computing that would surpass older such languages in both computational performance and in the efficiency with which pre-existing code can be reused and new code written.) The aforementioned other concepts are those of distributions, domains, allocations, and access, as defined in a separate document called "A Semantic Framework for Domains and Distributions in Chapel" and linked to a language specification defined in another separate document called "Chapel Specification 0.3." The concept presented in the instant report is recognition that a data domain that was invented for Chapel offers a novel approach to distributing and processing data in a massively parallel environment. The concept is offered as a starting point for development of working descriptions of functions and data structures that would be necessary to implement interfaces to a compiler for transforming the aforementioned other concepts from their representations in Chapel source code to their run-time implementations.

  15. The Influence of Syllable Onset Complexity and Syllable Frequency on Speech Motor Control

    ERIC Educational Resources Information Center

    Riecker, Axel; Brendel, Bettina; Ziegler, Wolfram; Erb, Michael; Ackermann, Hermann

    2008-01-01

    Functional imaging studies have delineated a "minimal network for overt speech production," encompassing mesiofrontal structures (supplementary motor area, anterior cingulate gyrus), bilateral pre- and postcentral convolutions, extending rostrally into posterior parts of the inferior frontal gyrus (IFG) of the language-dominant hemisphere, left…

  16. Central auditory processing disorder (CAPD) in children with specific language impairment (SLI). Central auditory tests.

    PubMed

    Dlouha, Olga; Novak, Alexej; Vokral, Jan

    2007-06-01

    The aim of this project is to use central auditory tests for the diagnosis of central auditory processing disorder (CAPD) in children with specific language impairment (SLI), in order to confirm the relationship between speech-language impairment and central auditory processing. We attempted to establish special dichotic binaural tests in the Czech language modified for younger children. The tests are based on behavioral audiometry using dichotic listening (different auditory stimuli presented to each ear simultaneously). The experimental tasks consisted of three auditory measures (tests 1-3): dichotic listening to two-syllable words presented as binaural interaction tests. Children with SLI are unable to create simple sentences from two words that are heard separately but simultaneously. Results in our group of 90 pre-school children (6-7 years old) confirmed an integration deficit and problems with the quality of short-term memory. The average rate of success of children with specific language impairment was 56% in test 1, 64% in test 2 and 63% in test 3. Results of the control group: 92% in test 1, 93% in test 2 and 92% in test 3 (p<0.001). Our results indicate a relationship between disorders of speech-language perception and central auditory processing disorders.

  17. Swahili speech development: preliminary normative data from typically developing pre-school children in Tanzania.

    PubMed

    Gangji, Nazneen; Pascoe, Michelle; Smouse, Mantoa

    2015-01-01

    Swahili is widely spoken in East Africa, but to date there are no culturally and linguistically appropriate materials available for speech-language therapists working in the region. The challenges are further exacerbated by the limited research available on the typical acquisition of Swahili phonology. To describe the speech development of 24 typically developing first-language Swahili-speaking children between the ages of 3;0 and 5;11 years in Dar es Salaam, Tanzania. A cross-sectional design was used with six groups of four children in 6-month age bands. Single-word speech samples were obtained from each child using a set of culturally appropriate pictures designed to elicit all consonants and vowels of Swahili. Each child's speech was audio-recorded and phonetically transcribed using International Phonetic Alphabet (IPA) conventions. Children's speech development is described in terms of (1) phonetic inventory, (2) syllable structure inventory, (3) phonological processes and (4) percentage consonants correct (PCC) and percentage vowels correct (PVC). Results suggest a gradual progression in the acquisition of speech sounds and syllables between the ages of 3;0 and 5;11 years. Vowel acquisition was complete and most of the consonants were acquired by age 3;0. Fricatives /z, s, h/ were acquired later, at age 4, and /θ/ and /r/ were the last consonants acquired, at age 5;11. Older children were able to produce speech sounds more accurately and had fewer phonological processes in their speech than younger children. Common phonological processes included lateralization and sound preference substitutions. The study contributes a preliminary set of normative data on the speech development of Swahili-speaking children. Findings are discussed in relation to theories of phonological development, and may be used as a basis for further normative studies with larger numbers of children and, ultimately, the development of a contextually relevant assessment of the phonology of Swahili.

  18. Language Education: Past, Present and Future

    ERIC Educational Resources Information Center

    Krashen, Stephen

    2008-01-01

    The recent past in language teaching has been dominated by the Skill-Building Hypothesis, the view that we learn language by first learning about it, and then practicing the rules we learned in output. The present is marked by the emergence of the Comprehension Hypothesis, the view that we acquire language when we understand messages, and is also…

  19. Recent language experience influences cross-language activation in bilinguals with different scripts

    PubMed Central

    Li, Chuchu; Wang, Min; Lin, Candise Y

    2017-01-01

    Purpose This study aimed to examine whether the phonological information in the non-target language is activated and how it influences bilingual processing. Approach Using the Stroop paradigm, Mandarin-English bilinguals named the ink color of Chinese characters in English in Experiment 1 and named the Chinese characters in addition to the color naming in English in Experiment 2. Twenty-four participants were recruited in each experiment. In both experiments, the visual stimuli included color characters (e.g. 红, hong2, red), homophones of the color characters (e.g. 洪, hong2, flood), characters that only shared the same syllable segment with the color characters (S+T−, e.g. 轰, hong1, boom), characters that shared the same tone but differed in segments with the color characters (S−T+, e.g. 瓶, ping2, bottle), and neutral characters (e.g. 牵, qian1, leading through). Data and analysis Planned t-tests were conducted in which participants’ naming accuracy rate and naming latency in each phonological condition were compared with the neutral condition. Findings Experiment 1 only showed the classic Stroop effect in the color character condition. In Experiment 2, in addition to the classic Stroop effect, the congruent homophone condition (e.g. 洪 in red) showed a significant Stroop interference effect. These results suggested that for bilingual speakers with different scripts, phonological information in the non-target language may not be automatically activated even though the written words in the non-target language were visually presented. However, if the phonological information of the non-target language is activated in advance, it could lead to competition between the two languages, likely at both the phonological and lemma levels. Originality and significance This study is among the first to investigate whether the translation of a word is phonologically encoded in bilinguals using the Stroop paradigm. The findings improve our understanding of the

  20. Second Language Acquisition: Possible Insights from Studies on How Birds Acquire Song.

    ERIC Educational Resources Information Center

    Neapolitan, Denise M.; And Others

    1988-01-01

    Reviews research that demonstrates parallels between general linguistic and cognitive processes in human language acquisition and avian acquisition of song and discusses how such research may provide new insights into the processes of second-language acquisition. (Author/CB)

  1. Age of second language acquisition in multilinguals has an impact on gray matter volume in language-associated brain areas.

    PubMed

    Kaiser, Anelis; Eppenberger, Leila S; Smieskova, Renata; Borgwardt, Stefan; Kuenzli, Esther; Radue, Ernst-Wilhelm; Nitsch, Cordula; Bendfeldt, Kerstin

    2015-01-01

    Numerous structural studies have established that experience shapes and reshapes the brain throughout a lifetime. The impact of early development, however, is still a matter of debate. Further clues may come from studying multilinguals who acquired their second language at different ages. We investigated adult multilinguals who spoke three languages fluently, where the third language was learned in classroom settings, not before the age of 9 years. Multilinguals exposed to two languages simultaneously from birth (SiM) were contrasted with multilinguals who acquired their first two languages successively (SuM). Whole brain voxel based morphometry revealed that, relative to SuM, SiM have significantly lower gray matter volume in several language-associated cortical areas in both hemispheres: bilaterally in medial and inferior frontal gyrus, in the right medial temporal gyrus and inferior posterior parietal gyrus, as well as in the left inferior temporal gyrus. Thus, as shown by others, successive language learning increases the volume of language-associated cortical areas. In brains exposed early on and simultaneously to more than one language, however, learning of additional languages seems to have less impact. We conclude that - at least with respect to language acquisition - early developmental influences are maintained and have an effect on experience-dependent plasticity well into adulthood.

  2. Using Oral Language Skills to Build on the Emerging Literacy of Adult English Learners. CAELA Network Brief

    ERIC Educational Resources Information Center

    Vinogradov, Patsy; Bigelow, Martha

    2010-01-01

    In addition to learning to read and write for the first time, adult English language learners with limited or emerging literacy skills must acquire oral English. Often, learners with limited print literacy in their first language have oral skills in English that exceed their English literacy skills (Geva & Zadeh, 2006). While this mismatch of oral…

  3. Prosodic Perception Problems in Spanish Dyslexia

    ERIC Educational Resources Information Center

    Cuetos, Fernando; Martínez-García, Cristina; Suárez-Coalla, Paz

    2018-01-01

    The aim of this study was to investigate prosodic abilities, alongside phonological and visual abilities, in children with dyslexia in Spanish, which can be considered a syllable-timed language. The performances on prosodic tasks (prosodic perception, rise-time perception), phonological tasks (phonological awareness, rapid naming, verbal working…

  4. Sight-Singing Scores of High School Choristers with Extensive Training in Movable Solfege Syllables and Curwen Hand Signs

    ERIC Educational Resources Information Center

    McClung, Alan C.

    2008-01-01

    Randomly chosen high school choristers with extensive training in solfege syllables and Curwen hand signs (N = 38) are asked to sight-sing two melodies, one while using Curwen hand signs and the other without. Out of a perfect score of 16, the mean score with hand signs was 10.37 (SD = 4.23), and without hand signs, 10.84 (SD = 3.96). A…

  5. Maturational and Non-Maturational Factors in Heritage Language Acquisition

    ERIC Educational Resources Information Center

    Moon, Ji Hye

    2012-01-01

    This dissertation aims to understand the maturational and non-maturational aspects of early bilingualism and language attrition in heritage speakers who have acquired their L1 incompletely in childhood. The study highlights the influential role of age and input dynamics in early L1 development, where the timing of reduction in L1 input and the…

  6. Conversation and Language Acquisition: A Pragmatic Approach

    ERIC Educational Resources Information Center

    Clark, Eve V.

    2018-01-01

    Children acquire language in conversation. This is where they are exposed to the community language by more expert speakers. This exposure is effectively governed by adult reliance on pragmatic principles in conversation: Cooperation, Conventionality, and Contrast. All three play a central role in speakers' use of language for communication in…

  7. Students' Choice of Language and Initial Motivation for Studying Japanese at the University of Jyväskylä Language Centre

    ERIC Educational Resources Information Center

    Takala, Pauliina

    2015-01-01

    Elective language courses, particularly those starting from the beginner level, constitute their own special group within the communication and language course offerings of universities. The elementary courses of less commonly taught languages (LCTL), such as Japanese, provide students with the opportunity to acquire, among other benefits, a…

  8. The impact of visual sequencing of graphic symbols on the sentence construction output of children who have acquired language.

    PubMed

    Alant, Erna; du Plooy, Amelia; Dada, Shakila

    2007-01-01

    Although the sequence of graphic or pictorial symbols displayed on a communication board can have an impact on the language output of children, very little research has been conducted to describe this. Research in this area is particularly relevant for prioritising the importance of specific visual and graphic features in providing more effective and user-friendly access to communication boards. This study is concerned with understanding the impact of specific sequences of graphic symbol input on the graphic and spoken output of children who have acquired language. Forty participants were divided into two comparable groups. Each group was exposed to graphic symbol input with a certain word order sequence. The structure of input was either the typical English word order sequence, Subject-Verb-Object (SVO), or the word order sequence Subject-Object-Verb (SOV). Both input groups had to answer six questions by using graphic output as well as speech. The findings indicated that there are significant differences in the PCS graphic output patterns of children who are exposed to graphic input in the SOV and SVO sequences. Furthermore, the output produced in the graphic mode differed considerably from the output produced in the spoken mode. Clinical implications of these findings are discussed.

  9. Atypical audio-visual speech perception and McGurk effects in children with specific language impairment

    PubMed Central

    Leybaert, Jacqueline; Macchi, Lucie; Huyse, Aurélie; Champoux, François; Bayard, Clémence; Colin, Cécile; Berthommier, Frédéric

    2014-01-01

    Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory alone, visual alone (speechreading), audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed a similar auditory masking release effect to children with TLD. Children with SLI also made fewer correct responses in speechreading than children with TLD, indicating impairment in phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of percent of information transmitted revealed a deficit in the children with SLI, particularly for the place of articulation feature. Taken together, the data support the hypothesis of an intact peripheral processing of auditory speech information, coupled with a supramodal deficit of phonemic categorization in children with SLI. Clinical implications are discussed. PMID:24904454

  11. Longitudinal Evaluation of Language Impairment in Youth With Perinatally Acquired Human Immunodeficiency Virus (HIV) and Youth With Perinatal HIV Exposure.

    PubMed

    Redmond, Sean M; Yao, Tzy-Jyun; Russell, Jonathan S; Rice, Mabel L; Hoffman, Howard J; Siberry, George K; Frederick, Toni; Purswani, Murli; Williams, Paige L

    2016-12-01

    Language impairment (LI) risk is increased for perinatally acquired human immunodeficiency virus-infected (PHIV) and perinatally exposed to HIV but uninfected (PHEU) youth. This study evaluates the persistence of LI in these groups. The Clinical Evaluation of Language Fundamentals was repeated on participants of the Pediatric HIV/AIDS Cohort Study Adolescent Master Protocol 18 months postbaseline. Regression models identified factors associated with change in standardized score (SC) and the resolution or development of LI. Of 319 participants, 112 had LI at baseline. Upon re-evaluation, SCs were highly stable and changes were similar in PHIV (n = 212) and PHEU (n = 107) participants. Those with family history of language delays had a 2.39 point lower mean increase in SCs than those without, after controlling for demographic and socioeconomic factors and baseline LI status. Among PHIV participants, CD4 count <350 cells/mm³ was associated with lower mean SC change (4.32 points), and exposure to combination antiretroviral therapy (cART) or protease inhibitors (PIs) was associated with a higher mean SC change (5.93 and 4.19 points, respectively). Initial LI was persistent in most cases (78%); 20 new cases occurred (10%). Female sex was associated with higher odds of LI resolution. Among PHIV, duration and baseline cART and history of PI use were associated with LI resolution; higher percentage of detectable viral loads before baseline was associated with lower odds of resolution. The PHIV and PHEU youth are at risk for persistent LI, and family history of language delays was a risk factor for persistence of problems. Measures of successful HIV treatment predicted more favorable outcomes among PHIV youth. © The Author 2016. Published by Oxford University Press on behalf of the Pediatric Infectious Diseases Society. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. The effects of cognitive-linguistic variables and language experience on behavioural and kinematic performances in nonword learning

    PubMed Central

    Sasisekaran, Jayanthi; Weisberg, Sanford

    2013-01-01

    The aim of the present study was to investigate the effect of cognitive-linguistic variables and language experience on behavioral and kinematic measures of nonword learning in young adults. Group 1 consisted of thirteen participants who spoke American English as the first and only language. Group 2 consisted of seven participants with varying levels of proficiency in a second language. Logistic regression of the percent of correct productions revealed short-term memory to be a significant contributor. The bilingual group showed better performance compared to the monolinguals. Linear regression of the kinematic data revealed that the short-term memory variable contributed significantly to movement coordination. Differences were not observed between the bilingual and the monolingual speakers in kinematic performance. Nonword properties including syllable length and complexity influenced both behavioral and kinematic performance. The findings supported the observation that nonword repetition is multiply determined in adults. PMID:22476630

  13. Acquiring and Processing Verb Argument Structure: Distributional Learning in a Miniature Language

    ERIC Educational Resources Information Center

    Wonnacott, Elizabeth; Newport, Elissa L.; Tanenhaus, Michael K.

    2008-01-01

    Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some…

  14. Language Maintenance and Shift across Generations in Inner Mongolia

    ERIC Educational Resources Information Center

    Puthuval, Sarala

    2017-01-01

    Language shift happens when a group of people stops using one language in favor of another, such that subsequent generations no longer acquire the original language. Research on the sociolinguistics of language shift has tended to focus on languages in advanced states of endangerment, where most or all children in the community have already…

  15. Listening to Accented Speech in a Second Language: First Language and Age of Acquisition Effects

    ERIC Educational Resources Information Center

    Larraza, Saioa; Samuel, Arthur G.; Oñederra, Miren Lourdes

    2016-01-01

    Bilingual speakers must acquire the phonemic inventory of 2 languages and need to recognize spoken words cross-linguistically, a demanding job potentially made even more difficult by dialectal variation, an intrinsic property of speech. The present work examines how bilinguals perceive second language (L2) accented speech and where…

  16. When Language of Instruction and Language of Application Differ: Cognitive Costs of Bilingual Mathematics Learning

    ERIC Educational Resources Information Center

    Saalbach, Henrik; Eckstein, Doris; Andri, Nicoletta; Hobi, Reto; Grabner, Roland H.

    2013-01-01

    Bilingual education programs implicitly assume that the acquired knowledge is represented in a language-independent way. This assumption, however, stands in strong contrast to research findings showing that information may be represented in a way closely tied to the specific language of instruction and learning. The present study aims to examine…

  17. Language Socialisation and Language Shift in the 1b Generation: A Study of Moroccan Adolescents in Italy

    ERIC Educational Resources Information Center

    di Lucca, Lucia; Masiero, Giovanna; Pallotti, Gabriele

    2008-01-01

    This paper reports on a longitudinal ethnographic study of the language socialisation of a group of Moroccan adolescents who migrated to Italy in the late 1990s. The approach is based on the notion of language socialisation, which sees the process of acquiring a language as linked to that of becoming a member of a culture. The participants live in…

  18. Learning the language of time: Children's acquisition of duration words.

    PubMed

    Tillman, Katharine A; Barner, David

    2015-05-01

    Children use time words like minute and hour early in development, but take years to acquire their precise meanings. Here we investigate whether children assign meaning to these early usages, and if so, how. To do this, we test their interpretation of seven time words: second, minute, hour, day, week, month, and year. We find that preschoolers infer the orderings of time words (e.g., hour > minute), but have little to no knowledge of the absolute durations they encode. Knowledge of absolute duration is learned much later in development, many years after children first start using time words in speech, and in many children does not emerge until they have acquired formal definitions for the words. We conclude that associating words with the perception of duration does not come naturally to children, and that early intuitive meanings of time words are instead rooted in relative orderings, which children may infer from their use in speech. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Teaching in Two Tongues: Rethinking the Role of Language(s) in Teacher Education in India

    ERIC Educational Resources Information Center

    Menon, Shailaja; Viswanatha, Vanamala; Sahi, Jane

    2014-01-01

    This article is a sharing of emergent ideas about the potential role of languages in teacher education (TE) programmes in multilingual contexts in India. Languages play a critical role in TE programmes where they shape both the learning as well as the future teaching of prospective teachers. This role acquires particular significance in…

  20. Phonological Acquisition of Korean Consonants in Conversational Speech Produced by Young Korean Children

    ERIC Educational Resources Information Center

    Kim, Minjung; Kim, Soo-Jin; Stoel-Gammon, Carol

    2017-01-01

    This study investigates the phonological acquisition of Korean consonants using conversational speech samples collected from sixty monolingual typically developing Korean children aged two, three, and four years. Phonemic acquisition was examined for syllable-initial and syllable-final consonants. Results showed that Korean children acquired stops…

  1. Language, Thought, and Real Nouns

    ERIC Educational Resources Information Center

    Barner, David; Inagaki, Shunji; Li, Peggy

    2009-01-01

    We test the claim that acquiring a mass-count language, like English, causes speakers to think differently about entities in the world, relative to speakers of classifier languages like Japanese. We use three tasks to assess this claim: object-substance rating, quantity judgment, and word extension. Using the first two tasks, we present evidence…

  2. Advanced Language Attrition of Spanish in Contact with Brazilian Portuguese

    ERIC Educational Resources Information Center

    Iverson, Michael Bryan

    2012-01-01

    Language acquisition research frequently concerns itself with linguistic development and the outcome of the acquisition process with respect to a first or subsequent language. For some, it seems tacitly assumed that a first language, once acquired, remains stable, regardless of exposure to and the acquisition of additional language(s) beyond the first…

  3. Generating Language Activities in Real-Time for English Learners Using Language Muse

    ERIC Educational Resources Information Center

    Burstein, Jill; Madnani, Nitin; Sabatini, John; McCaffrey, Dan; Biggers, Kietha; Dreier, Kelsey

    2017-01-01

    K-12 education standards in the U.S. require all students to read complex texts across many subject areas. The "Language Muse™ Activity Palette" is a web-based language-instruction application that uses NLP algorithms and lexical resources to automatically generate language activities and support English language learners' content…

  4. Progressive Modularization: Reframing Our Understanding of Typical and Atypical Language Development

    ERIC Educational Resources Information Center

    D'Souza, Dean; Filippi, Roberto

    2017-01-01

    The ability to acquire language is a critical part of human development. Yet there is no consensus on how the skill emerges in early development. Does it constitute an innately-specified, language-processing module or is it acquired progressively? One of Annette Karmiloff-Smith's (1938-2016) key contributions to developmental science addresses…

  5. Tense Times and Language Planning

    ERIC Educational Resources Information Center

    Bianco, Joseph Lo

    2008-01-01

    This article discusses the different effects arising from deliberations on language policy in conferences dominated by professionals compared with the interests, priorities and demands of geo-political and military security coming from defense interests. The paper proposes a three dimensional way to understand language policy: text, discourse and…

  6. The "Fundamental Pedogagical Principle" in Second Language Teaching.

    ERIC Educational Resources Information Center

    Krashen, Stephen D.

    1981-01-01

    A fundamental principle of second language acquisition is stated and applied to language teaching. The principle states that learners acquire a second language when they receive comprehensible input in situations where their affective filters are sufficiently low. The theoretical background of this principle consists of five hypotheses: the…

  7. The neural dynamics of song syntax in songbirds

    NASA Astrophysics Data System (ADS)

    Jin, Dezhe

    2010-03-01

    Songbird is "the hydrogen atom" of the neuroscience of complex, learned vocalizations such as human speech. Songs of the Bengalese finch consist of sequences of syllables. While syllables are temporally stereotypical, syllable sequences can vary and follow complex, probabilistic syntactic rules, which are rudimentarily similar to grammars in human language. The songbird brain is accessible to experimental probes, and is understood well enough to construct biologically constrained, predictive computational models. In this talk, I will discuss the structure and dynamics of neural networks underlying the stereotypy of birdsong syllables and the flexibility of syllable sequences. Recent experiments and computational models suggest that a syllable is encoded in a chain network of projection neurons in premotor nucleus HVC (proper name). Precisely timed spikes propagate along the chain, driving vocalization of the syllable through downstream nuclei. Through a computational model, I show that variable syllable sequences can be generated through spike propagations in a network in HVC in which the syllable-encoding chain networks are connected into a branching chain pattern. The neurons mutually inhibit each other through the inhibitory HVC interneurons, and are driven by external inputs from nuclei upstream of HVC. At a branching point that connects the final group of a chain to the first groups of several chains, the spike activity selects one branch to continue the propagation. The selection is probabilistic, and is due to the winner-take-all mechanism mediated by the inhibition and noise. The model predicts that the syllable sequences statistically follow partially observable Markov models. Experimental results supporting this and other predictions of the model will be presented. We suggest that the syntax of birdsong syllable sequences is embedded in the connection patterns of HVC projection neurons.
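    The branching-chain picture in this abstract can be caricatured in a few lines of Python: each syllable's chain ends at a branching point, and the winner-take-all competition among candidate branches is modeled here as a weighted random choice. The transition table is an invented toy example, not Bengalese finch data:

    ```python
    import random

    # Assumed toy syllable syntax (invented for illustration): each entry maps
    # a syllable to the branches reachable from its chain's final group, with
    # the probability that winner-take-all competition selects that branch.
    TRANSITIONS = {
        "a": {"b": 0.7, "c": 0.3},
        "b": {"c": 0.5, "end": 0.5},
        "c": {"a": 0.4, "end": 0.6},
    }

    def sing(start="a", seed=0, max_len=50):
        """Generate one syllable sequence by walking the branching chain."""
        rng = random.Random(seed)
        seq, state = [], start
        while state != "end" and len(seq) < max_len:
            seq.append(state)
            branches, probs = zip(*TRANSITIONS[state].items())
            state = rng.choices(branches, weights=probs, k=1)[0]
        return seq
    ```

    Sequences generated this way are first-order Markov; the abstract's stronger claim, that real sequences statistically follow partially observable Markov models, would require hidden states shared across syllables, which this sketch omits.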

  8. Relationships between Language Teachers' Time-Management Skills, Creativity, and Burnout: A Mediation Analysis

    ERIC Educational Resources Information Center

    Mahmoodi-Shahrebabaki, Masoud

    2015-01-01

    The present study aimed to investigate the effects of language teachers' time management and creativity skills on their burnout levels. The sample consisted of 213 Iranian language teachers. The Maslach Burnout Inventory (MBI), Creative Behavior Inventory (CBI) and Time Management Skills Questionnaire (TMSQ) were employed for data collection. By…

  9. Providing written language services in the schools: the time is now.

    PubMed

    Fallon, Karen A; Katz, Lauren A

    2011-01-01

    The current study was conducted to investigate the provision of written language services by school-based speech-language pathologists (SLPs). Specifically, the study examined SLPs' knowledge, attitudes, and collaborative practices in the area of written language services as well as the variables that impact provision of these services. Public school-based SLPs from across the country were solicited for participation in an online, Web-based survey. Data from 645 full-time SLPs from 49 states were evaluated using descriptive statistics and logistic regression. Many school-based SLPs reported not providing any services in the area of written language to students with written language weaknesses. Knowledge, attitudes, and collaborative practices were mixed. A logistic regression revealed three variables likely to predict high levels of service provision in the area of written language. Data from the current study revealed that many struggling readers and writers on school-based SLPs' caseloads are not receiving services from their SLPs. Implications for SLPs' preservice preparation, continuing education, and doctoral preparation are discussed.

  10. Computer-Assisted Language Learning (CALL) in Support of (Re)-Learning Native Languages: The Case of Runyakitara

    ERIC Educational Resources Information Center

    Katushemererwe, Fridah; Nerbonne, John

    2015-01-01

    This study presents the results from a computer-assisted language learning (CALL) system of Runyakitara (RU_CALL). The major objective was to provide an electronic language learning environment that can enable learners with mother tongue deficiencies to enhance their knowledge of grammar and acquire writing skills in Runyakitara. The system…

  11. Driving Under the Influence (of Language).

    PubMed

    Barrett, Daniel Paul; Bronikowski, Scott Alan; Yu, Haonan; Siskind, Jeffrey Mark

    2017-06-09

    We present a unified framework which supports grounding natural-language semantics in robotic driving. This framework supports acquisition (learning grounded meanings of nouns and prepositions from human sentential annotation of robotic driving paths), generation (using such acquired meanings to generate sentential description of new robotic driving paths), and comprehension (using such acquired meanings to support automated driving to accomplish navigational goals specified in natural language). We evaluate the performance of these three tasks by having independent human judges rate the semantic fidelity of the sentences associated with paths. Overall, machine performance is 74.9%, while the performance of human annotators is 83.8%.

  12. Linguistic Evolution through Language Acquisition: Formal and Computational Models.

    ERIC Educational Resources Information Center

    Briscoe, Ted, Ed.

    This collection of papers examines how children acquire language and how this affects language change over the generations. It proceeds from the basis that it is important to address not only the language faculty per se within the framework of evolutionary theory, but also the origins and subsequent development of languages themselves, suggesting…

  13. [Study on the acquiring data time and intervals for measuring performance of air cleaner on formaldehyde].

    PubMed

    Tang, Zhigang; Wang, Guifang; Xu, Dongqun; Han, Keqin; Li, Yunpu; Zhang, Aijun; Dong, Xiaoyan

    2004-09-01

    The measuring times and measuring intervals needed to evaluate the formaldehyde-removal performance of different types of air cleaner were established. The natural decay measurement and the formaldehyde removal measurement were conducted in 1.5 m³ and 30 m³ test chambers. The natural decay rate was determined by acquiring formaldehyde concentration data at 15-minute intervals for 2.5 hours. The measured decay rate was determined by acquiring formaldehyde concentration data at 5-minute intervals for 1.2 hours. When the airflow of the air cleaner is smaller than 30 m³/h, or when measuring the performance of an air-cleaning product with no airflow, the 1.5 m³ test chamber can be used; in that case, both the natural decay rate and the measured decay rate are determined by acquiring formaldehyde concentration data at 8-minute intervals for 64 minutes. Different measuring times and measuring intervals are thus required to evaluate the formaldehyde-removal performance of different types of air cleaner.
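    Decay rates like these are conventionally obtained by assuming first-order decay, C(t) = C0 * exp(-k*t), and fitting a straight line to ln C against time. A minimal Python sketch under that assumption (the sample readings are invented; real chamber protocols also control temperature, humidity, and mixing):

    ```python
    import math

    def decay_rate(times_min, concs):
        """First-order decay constant k (1/min) from concentration readings.

        Fits ln(C) = ln(C0) - k*t by ordinary least squares, assuming
        C(t) = C0 * exp(-k*t).
        """
        n = len(times_min)
        xbar = sum(times_min) / n
        logs = [math.log(c) for c in concs]
        ybar = sum(logs) / n
        slope = (sum((x - xbar) * (y - ybar) for x, y in zip(times_min, logs))
                 / sum((x - xbar) ** 2 for x in times_min))
        return -slope

    # Invented readings at 8-minute intervals, as in the 1.5 m³ chamber case:
    k_natural = decay_rate([0, 8, 16, 24], [1.00, 0.92, 0.85, 0.78])
    k_measured = decay_rate([0, 8, 16, 24], [1.00, 0.60, 0.36, 0.22])
    ```

    Chamber standards typically derive the cleaner's performance from the difference between the measured and natural decay rates, scaled by chamber volume; that final step is omitted here.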

  14. From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language

    PubMed Central

    Poliva, Oren

    2016-01-01

    The auditory cortex communicates with the frontal lobe via the middle temporal gyrus (auditory ventral stream; AVS) or the inferior parietal lobule (auditory dorsal stream; ADS). Whereas the AVS is ascribed only with sound recognition, the ADS is ascribed with sound localization, voice detection, prosodic perception/production, lip-speech integration, phoneme discrimination, articulation, repetition, phonological long-term memory and working memory. Previously, I interpreted the juxtaposition of sound localization, voice detection, audio-visual integration and prosodic analysis, as evidence that the behavioral precursor to human speech is the exchange of contact calls in non-human primates. Herein, I interpret the remaining ADS functions as evidence of additional stages in language evolution. According to this model, the role of the ADS in vocal control enabled early Homo (Hominans) to name objects using monosyllabic calls, and allowed children to learn their parents' calls by imitating their lip movements. Initially, the calls were forgotten quickly but gradually were remembered for longer periods. Once the representations of the calls became permanent, mimicry was limited to infancy, and older individuals encoded in the ADS a lexicon for the names of objects (phonological lexicon). Consequently, sound recognition in the AVS was sufficient for activating the phonological representations in the ADS and mimicry became independent of lip-reading. Later, by developing inhibitory connections between acoustic-syllabic representations in the AVS and phonological representations of subsequent syllables in the ADS, Hominans became capable of concatenating the monosyllabic calls for repeating polysyllabic words (i.e., developed working memory). Finally, due to strengthening of connections between phonological representations in the ADS, Hominans became capable of encoding several syllables as a single representation (chunking). Consequently, Hominans began vocalizing and

  15. Language Is a Complex Adaptive System: Position Paper

    ERIC Educational Resources Information Center

    Beckner, Clay; Blythe, Richard; Bybee, Joan; Christiansen, Morten H.; Croft, William; Ellis, Nick C.; Holland, John; Ke, Jinyun; Larsen-Freeman, Diane; Schoenemann, Tom

    2009-01-01

    Language has a fundamentally social function. Processes of human interaction along with domain-general cognitive processes shape the structure and knowledge of language. Recent research in the cognitive sciences has demonstrated that patterns of use strongly affect how language is acquired, is used, and changes. These processes are not independent…

  16. Directionality effects in simultaneous language interpreting: the case of sign language interpreters in The Netherlands.

    PubMed

    Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.

  17. The impact of early bilingualism on controlling a language learned late: an ERP study

    PubMed Central

    Martin, Clara D.; Strijkers, Kristof; Santesteban, Mikel; Escera, Carles; Hartsuiker, Robert J.; Costa, Albert

    2013-01-01

    This study asks whether early bilingual speakers, who have already developed a language control mechanism to handle two languages, control a dominant and a late-acquired language in the same way as late bilingual speakers. We therefore compared event-related potentials in a language switching task in two groups of participants switching between a dominant (L1) and a weak late-acquired language (L3). Early bilingual late learners of an L3 showed a different ERP pattern (larger N2 mean amplitude) than late bilingual late learners of an L3. Even though the relative strength of the languages was similar in both groups (a dominant and a weak late-acquired language), they controlled their language output in a different manner. Moreover, the N2 was similar in two groups of early bilinguals tested in languages of different strength. We conclude that early bilingual learners of an L3 do not control languages in the same way as late bilingual L3 learners, who have not achieved native-like proficiency in their L2, do. This difference might explain some of the advantages early bilinguals have when learning new languages. PMID:24204355

  18. Language and the African American Child

    ERIC Educational Resources Information Center

    Green, Lisa J.

    2011-01-01

    How do children acquire African American English? How do they develop the specific language patterns of their communities? Drawing on spontaneous speech samples and data from structured elicitation tasks, this book explains the developmental trends in the children's language. It examines topics such as the development of tense/aspect marking,…

  19. HOW TO LEARN AN UNWRITTEN LANGUAGE.

    ERIC Educational Resources Information Center

    Gudschinsky, Sarah C.

    A practical guide for the anthropology student confronted with learning a language in the field, this book focuses on acquiring everyday conversation rather than difficult linguistic problems. The form and content are based on the following basic premises: (1) learning a language consists of discovering and controlling as automatic habits the…

  20. Language Learning by Dint of Social Cognitive Advancement

    ERIC Educational Resources Information Center

    Mathew, Bincy; Raja, B. William Dharma

    2015-01-01

    Language is of vital importance to human beings. It is a means of communication and it has specific cognitive links. Advanced social cognition is necessary for children to acquire language, and sophisticated mind-reading abilities to assume word meanings and communicate pragmatically. Language can be defined as a bi-directional system that permits…

  1. Language and Literacy Development in Prelingually-Deaf Children

    ERIC Educational Resources Information Center

    Salmani Nodoushan, Mohammad Ali

    2008-01-01

    This paper attempts to address the issue of language development in hearing impaired children. It argues that interpreters, teachers or peers can provide deaf children with language exposure so that they can acquire their native languages more easily. It also argues that the provision of a developmentally appropriate, print-rich environment is the…

  2. Giving English Language Learners the Time They Need to Succeed: Profiles of Three Expanded Learning Time Schools

    ERIC Educational Resources Information Center

    Farbman, David A.

    2015-01-01

    With the number of students who are English language learners (ELLs) likely to double in coming years, it is more important than ever for schools across the U.S. to design and implement educational practices and strategies that best meet ELLs' learning needs, says the report, "Giving English Language Learners the Time They Need to…

  3. Some Myths You May Have Heard about First Language Acquisition.

    ERIC Educational Resources Information Center

    Gathercole, Virginia C.

    1988-01-01

    Reviews research and empirical evidence to refute three first language acquisition myths: (1) comprehension precedes production; (2) children acquire language in a systematic, rule-governed way; and (3) the impetus behind first language acquisition is communicative need. (Author/CB)

  4. Social Interaction Affects Neural Outcomes of Sign Language Learning As a Foreign Language in Adults.

    PubMed

    Yusa, Noriaki; Kim, Jungho; Koizumi, Masatoshi; Sugiura, Motoaki; Kawashima, Ryuta

    2017-01-01

    Children naturally acquire a language in social contexts where they interact with their caregivers. Indeed, research shows that social interaction facilitates lexical and phonological development at the early stages of child language acquisition. It is not clear, however, whether the relationship between social interaction and learning applies to adult second language acquisition of syntactic rules. Does learning second language syntactic rules through social interactions with a native speaker or without such interactions impact behavior and the brain? The current study aims to answer this question. Adult Japanese participants learned a new foreign language, Japanese sign language (JSL), either through a native deaf signer or via DVDs. Neural correlates of acquiring new linguistic knowledge were investigated using functional magnetic resonance imaging (fMRI). The participants in each group were indistinguishable in terms of their behavioral data after the instruction. The fMRI data, however, revealed significant differences in the neural activities between two groups. Significant activations in the left inferior frontal gyrus (IFG) were found for the participants who learned JSL through interactions with the native signer. In contrast, no cortical activation change in the left IFG was found for the group who experienced the same visual input for the same duration via the DVD presentation. Given that the left IFG is involved in the syntactic processing of language, spoken or signed, learning through social interactions resulted in an fMRI signature typical of native speakers: activation of the left IFG. Thus, broadly speaking, availability of communicative interaction is necessary for second language acquisition and this results in observed changes in the brain.

  5. Behavioral and computational aspects of language and its acquisition

    NASA Astrophysics Data System (ADS)

    Edelman, Shimon; Waterfall, Heidi

    2007-12-01

    One of the greatest challenges facing the cognitive sciences is to explain what it means to know a language, and how the knowledge of language is acquired. The dominant approach to this challenge within linguistics has been to seek an efficient characterization of the wealth of documented structural properties of language in terms of a compact generative grammar: ideally, the minimal necessary set of innate, universal, exception-less, highly abstract rules that jointly generate all and only the observed phenomena and are common to all human languages. We review developmental, behavioral, and computational evidence that seems to favor an alternative view of language, according to which linguistic structures are generated by a large, open set of constructions of varying degrees of abstraction and complexity, which embody both form and meaning and are acquired through socially situated experience in a given language community, by probabilistic learning algorithms that resemble those at work in other cognitive modalities.

  6. Second Language Acquisition and the Critical Period Hypothesis. Second Language Acquisition Research: Theoretical and Methodological Issues.

    ERIC Educational Resources Information Center

    Birdsong, David, Ed.

    This book considers the question of whether, or to what extent, a critical period limits the acquisition of a first language as well as a second language acquired postpubertally. The diversity of opinion on this question is represented in this volume. It is a question that has been approached by researchers working in linguistic theory, evolution…

  7. Deficits in Coordinative Bimanual Timing Precision in Children With Specific Language Impairment

    PubMed Central

    Goffman, Lisa; Zelaznik, Howard N.

    2017-01-01

    Purpose: Our objective was to delineate components of motor performance in specific language impairment (SLI); specifically, whether deficits in timing precision in one effector (unimanual tapping) and in two effectors (bimanual clapping) are observed in young children with SLI. Method: Twenty-seven 4- to 5-year-old children with SLI and 21 age-matched peers with typical language development participated. All children engaged in a unimanual tapping and a bimanual clapping timing task. Standard measures of language and motor performance were also obtained. Results: No group differences in timing variability were observed in the unimanual tapping task. However, compared with typically developing peers, children with SLI were more variable in their timing precision in the bimanual clapping task. Nine of the children with SLI performed greater than 1 SD below the mean on a standardized motor assessment. The children with low motor performance showed the same profile as observed across all children with SLI, with unaffected unimanual and impaired bimanual timing precision. Conclusions: Although unimanual timing is unaffected, children with SLI show a deficit in timing that requires bimanual coordination. We propose that the timing deficits observed in children with SLI are associated with the increased demands inherent in bimanual performance. PMID:28174821

  8. Kinematic investigation of lingual movement in words of increasing length in acquired apraxia of speech.

    PubMed

    Bartle-Meyer, Carly J; Goozee, Justine V; Murdoch, Bruce E

    2009-02-01

    The current study aimed to use electromagnetic articulography (EMA) to investigate the effect of increasing word length on lingual kinematics in acquired apraxia of speech (AOS). Tongue-tip and tongue-back movement was recorded for five speakers with AOS and a concomitant aphasia (mean age = 53.6 years; SD = 12.60) during target consonant production (i.e. /t, s, k/ singletons; /kl, sk/ clusters), for one- and two-syllable stimuli. The results obtained for each of the participants with AOS were individually compared to those obtained by a control group (n = 12; mean age = 52.08 years; SD = 12.52). Results indicated that the participants with AOS exhibited longer movement durations and, in some instances, larger tongue movements during consonant singletons and consonant cluster constituents embedded within mono- and multisyllabic utterances. Despite this, two participants with AOS exhibited a word length effect that was comparable with the control speakers, and possibly indicative of an intact phonological system.

  9. Functional flexibility of infant vocalization and the emergence of language

    PubMed Central

    Oller, D. Kimbrough; Buder, Eugene H.; Ramsdell, Heather L.; Warlaumont, Anne S.; Chorna, Lesya; Bakeman, Roger

    2013-01-01

    We report on the emergence of functional flexibility in vocalizations of human infants. This vastly underappreciated capability becomes apparent when prelinguistic vocalizations express a full range of emotional content—positive, neutral, and negative. The data show that at least three types of infant vocalizations (squeals, vowel-like sounds, and growls) occur with this full range of expression by 3–4 mo of age. In contrast, infant cry and laughter, which are species-specific signals apparently homologous to vocal calls in other primates, show functional stability, with cry overwhelmingly expressing negative and laughter positive emotional states. Functional flexibility is a sine qua non in spoken language, because all words or sentences can be produced as expressions of varying emotional states and because learning conventional “meanings” requires the ability to produce sounds that are free of any predetermined function. Functional flexibility is a defining characteristic of language, and empirically it appears before syntax, word learning, and even earlier-developing features presumed to be critical to language (e.g., joint attention, syllable imitation, and canonical babbling). The appearance of functional flexibility early in the first year of human life is a critical step in the development of vocal language and may have been a critical step in the evolution of human language, preceding protosyntax and even primitive single words. Such flexible affect expression of vocalizations has not yet been reported for any nonhuman primate but if found to occur would suggest deep roots for functional flexibility of vocalization in our primate heritage. PMID:23550164

  10. Multilayer network of language: A unified framework for structural analysis of linguistic subsystems

    NASA Astrophysics Data System (ADS)

    Martinčić-Ipšić, Sanda; Margan, Domagoj; Meštrović, Ana

    2016-09-01

    Recently, the focus of complex networks research has shifted from the analysis of isolated properties of a system toward a more realistic modeling of multiple phenomena: multilayer networks. Motivated by the success of the multilayer approach in social, transport, and trade systems, we introduce multilayer networks for language. The multilayer network of language is a unified framework for modeling linguistic subsystems and their structural properties, enabling the exploration of their mutual interactions. Various aspects of natural language systems can be represented as complex networks, whose vertices depict linguistic units, while links model their relations. The multilayer network of language is defined by three aspects: the network construction principle, the linguistic subsystem and the language of interest. More precisely, we construct word-level (syntax and co-occurrence) and subword-level (syllable and grapheme) network layers from four variations of original text (in the modeled language). The analysis and comparison of layers at the word and subword levels are employed in order to determine the mechanism of the structural influences between linguistic units and subsystems. The obtained results suggest that there are substantial differences between the network structures of different language subsystems, which are hidden during the exploration of an isolated layer. The word-level layers share structural properties regardless of the language (e.g. Croatian or English), while the syllabic subword-level layers express more language-dependent structural properties. The preserved weighted overlap quantifies the similarity of word-level layers in weighted and directed networks. Moreover, the analysis of motifs reveals a close topological structure of the syntactic and syllabic layers for both languages. The findings corroborate that the multilayer network framework is a powerful, consistent and systematic approach to model several linguistic subsystems
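
    The layer-construction principle described in the abstract above can be sketched as weighted edge counts over the same text. A minimal sketch, assuming a toy corpus, a co-occurrence window of one, and grapheme bigrams standing in for a full subword analysis (none of these are the authors' implementation details):

```python
from collections import Counter

# Toy corpus standing in for the "four variations of original text".
text = "the cat sat on the mat the cat ran"
words = text.split()

# Word-level layer: weighted edges between adjacent words
# (co-occurrence window of one; each ordered pair is one directed edge).
word_layer = Counter(zip(words, words[1:]))

# Subword-level layer: weighted edges between adjacent graphemes
# inside each word, built from the same text.
grapheme_layer = Counter(
    pair for word in words for pair in zip(word, word[1:])
)

print(word_layer[("the", "cat")])   # "the cat" occurs twice in the corpus
print(grapheme_layer[("t", "h")])   # "th" occurs once in each of the three "the"s
```

    Both layers share the same construction principle but different linguistic units, which is what allows their structures to be compared directly, as the paper does for word-level versus syllabic layers.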

  11. The Theory of Adaptive Dispersion and Acoustic-phonetic Properties of Cross-language Lexical-tone Systems

    NASA Astrophysics Data System (ADS)

    Alexander, Jennifer Alexandra

    Lexical-tone languages use fundamental frequency (F0/pitch) to convey word meaning. About 41.8% of the world's languages use lexical tone (Maddieson, 2008), yet those systems are under-studied. I aim to increase our understanding of speech-sound inventory organization by extending to tone-systems a model of vowel-system organization, the Theory of Adaptive Dispersion (TAD) (Liljencrants and Lindblom, 1972). This is a cross-language investigation of whether and how the size of a tonal inventory affects (A) acoustic tone-space size and (B) dispersion of tone categories within the tone-space. I compared five languages with very different tone inventories: Cantonese (3 contour, 3 level tones); Mandarin (3 contour, 1 level tone); Thai (2 contour, 3 level tones); Yoruba (3 level tones only); and Igbo (2 level tones only). Six native speakers (3 female) of each language produced 18 CV syllables in isolation, with each of his/her language's tones, six times. I measured tonal F0 across the vowel at onset, midpoint, and offglide. Tone-space size was the F0 difference in semitones (ST) between each language's highest and lowest tones. Tone dispersion was the F0 distance (ST) between two tones shared by multiple languages. Following the TAD, I predicted that languages with larger tone inventories would have larger tone-spaces. Against expectations, tone-space size was fixed across level-tone languages at midpoint and offglide, and across contour-tone languages (except Thai) at offglide. However, within each language type (level-tone vs. contour-tone), languages with smaller tone inventories had larger tone spaces at onset. Tone-dispersion results were also unexpected. The Cantonese mid-level tone was further dispersed from a tonal baseline than the Yoruba mid-level tone; Cantonese mid-level tone dispersion was therefore greater than theoretically necessary. The Cantonese high-level tone was also further dispersed from baseline than the Mandarin high-level tone -- at midpoint
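
    The semitone measure used in the abstract above is a standard conversion: the distance between two F0 values is 12 times the base-2 logarithm of their ratio (12 ST = one octave). A minimal sketch (the F0 values below are illustrative, not data from the study):

```python
import math

def semitone_distance(f0_high_hz: float, f0_low_hz: float) -> float:
    """Distance in semitones between two F0 values; 12 semitones = 1 octave."""
    return 12 * math.log2(f0_high_hz / f0_low_hz)

# Illustrative speaker: highest tone at 260 Hz, lowest at 195 Hz
# at the vowel midpoint -> a tone-space of about 4.98 ST.
span = semitone_distance(260.0, 195.0)
print(round(span, 2))
```

    Because the measure is a ratio, it factors out overall pitch-range differences between speakers, which is why it suits cross-speaker and cross-language comparison of tone spaces.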

  12. Modeling Coevolution between Language and Memory Capacity during Language Origin

    PubMed Central

    Gong, Tao; Shuai, Lan

    2015-01-01

    Memory is essential to many cognitive tasks, including language. Apart from empirical studies of memory effects on language acquisition and use, there have been few evolutionary explorations of whether a high level of memory capacity is a prerequisite for language and whether language origin could influence memory capacity. In line with evolutionary theories that natural selection refined language-related cognitive abilities, we advocated a coevolution scenario between language and memory capacity, which incorporated the genetic transmission of individual memory capacity, cultural transmission of idiolects, and natural and cultural selection on individual reproduction and language teaching. To illustrate the coevolution dynamics, we adopted a multi-agent computational model simulating the emergence of lexical items and simple syntax through iterated communications. Simulations showed that, along with the origin of a communal language, an initially low memory capacity for acquired linguistic knowledge was boosted; this coherent increase in linguistic understandability and memory capacity reflected a language-memory coevolution; and the coevolution stopped once memory capacities became sufficient for language communication. Statistical analyses revealed that the coevolution was realized mainly by natural selection based on individual communicative success in cultural transmissions. This work elaborated the biology-culture parallelism of language evolution, demonstrated the driving force of culturally constituted factors for natural selection of individual cognitive abilities, and suggested that the difference in degree of language-related cognitive abilities between humans and nonhuman animals could result from a coevolution with language. PMID:26544876

  13. Modeling Coevolution between Language and Memory Capacity during Language Origin.

    PubMed

    Gong, Tao; Shuai, Lan

    2015-01-01

    Memory is essential to many cognitive tasks, including language. Apart from empirical studies of memory effects on language acquisition and use, there have been few evolutionary explorations of whether a high level of memory capacity is a prerequisite for language and whether language origin could influence memory capacity. In line with evolutionary theories that natural selection refined language-related cognitive abilities, we advocated a coevolution scenario between language and memory capacity, which incorporated the genetic transmission of individual memory capacity, cultural transmission of idiolects, and natural and cultural selection on individual reproduction and language teaching. To illustrate the coevolution dynamics, we adopted a multi-agent computational model simulating the emergence of lexical items and simple syntax through iterated communications. Simulations showed that, along with the origin of a communal language, an initially low memory capacity for acquired linguistic knowledge was boosted; this coherent increase in linguistic understandability and memory capacity reflected a language-memory coevolution; and the coevolution stopped once memory capacities became sufficient for language communication. Statistical analyses revealed that the coevolution was realized mainly by natural selection based on individual communicative success in cultural transmissions. This work elaborated the biology-culture parallelism of language evolution, demonstrated the driving force of culturally constituted factors for natural selection of individual cognitive abilities, and suggested that the difference in degree of language-related cognitive abilities between humans and nonhuman animals could result from a coevolution with language.

  14. Perceptual context and individual differences in the language proficiency of preschool children.

    PubMed

    Banai, Karen; Yifat, Rachel

    2016-02-01

    Although the contribution of perceptual processes to language skills during infancy is well recognized, the role of perception in linguistic processing beyond infancy is not well understood. In the experiments reported here, we asked whether manipulating the perceptual context in which stimuli are presented across trials influences how preschool children perform visual (shape-size identification; Experiment 1) and auditory (syllable identification; Experiment 2) tasks. Another goal was to determine whether the sensitivity to perceptual context can explain part of the variance in oral language skills in typically developing preschool children. Perceptual context was manipulated by changing the relative frequency with which target visual (Experiment 1) and auditory (Experiment 2) stimuli were presented in arrays of fixed size, and identification of the target stimuli was tested. Oral language skills were assessed using vocabulary, word definition, and phonological awareness tasks. Changes in perceptual context influenced the performance of the majority of children on both identification tasks. Sensitivity to perceptual context accounted for 7% to 15% of the variance in language scores. We suggest that context effects are an outcome of a statistical learning process. Therefore, the current findings demonstrate that statistical learning can facilitate both visual and auditory identification processes in preschool children. Furthermore, consistent with previous findings in infants and in older children and adults, individual differences in statistical learning were found to be associated with individual differences in language skills of preschool children. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Formal semantic specifications as implementation blueprints for real-time programming languages

    NASA Technical Reports Server (NTRS)

    Feyock, S.

    1981-01-01

    Formal definitions of language and system semantics provide highly desirable checks on the correctness of implementations of programming languages and their runtime support systems. If these definitions can give concrete guidance to the implementor, major increases in implementation accuracy and decreases in implementation effort can be achieved. It is shown that, of the wide variety of available methods, the Hgraph (hypergraph) definitional technique (Pratt, 1975) is best suited to serve as such an implementation blueprint. A discussion and example of the Hgraph technique is presented, as well as an overview of the growing body of implementation experience of real-time languages based on Hgraph semantic definitions.

  16. The Relevance of Metrical Information in Early Prosodic Word Acquisition: A Comparison of Catalan and Spanish

    ERIC Educational Resources Information Center

    Prieto, Pilar

    2006-01-01

    This paper focuses on the development of Prosodic Word shapes in Catalan, a language which differs from both Spanish and English in the distribution of PW structures. Of particular interest are the truncations of initial unstressed syllables, and how these develop over time. Developmental qualitative and quantitative data from seven…

  17. The Role of Nonspeech Rhythm in Spanish Word Reading

    ERIC Educational Resources Information Center

    González-Trujillo, M. Carmen; Defior, Sylvia; Gutiérrez-Palma, Nicolás

    2014-01-01

    Recent literacy research shows an increasing interest in the influence of prosody on literacy acquisition. The current study examines the relationship of nonspeech rhythmic skills to children's reading acquisition, and their possible relation to stress assignment in Spanish, a syllable-timed language. Sixty-six third graders with no reading…

  18. Using Emotions and Personal Memory Associations to Acquire Vocabulary

    ERIC Educational Resources Information Center

    Randolph, Patrick T.

    2018-01-01

    Of all the possible tools available to help English language learners (ELLs) acquire vocabulary, the use of emotions is one of the most powerful, because "we are learning that emotions are the result of multiple brain and body systems that are distributed over the whole person". If we go one step further and connect emotions to…

  19. Hypermedia and Vocabulary Acquisition for Second Language

    ERIC Educational Resources Information Center

    Meli, Rocio

    2009-01-01

    The purpose of this study was to examine the impact of multimedia as a delivery tool for enhancing vocabulary in second-language classrooms. The mixed method design focused on specific techniques to help students acquire Spanish vocabulary and communication skills. The theoretical framework for this study consisted of second language theories…

  20. Immersing the Library in English Language Learning

    ERIC Educational Resources Information Center

    Riley, Bobby

    2008-01-01

    The author relates how his school has a very active English Language Learner (ELL) program. ELL students typically have varying levels of social and academic language, but almost always have some English proficiency. Recently, his school established a Newcomer Program that drastically changed the school system. They acquired students lacking any…

  1. Home environmental influences on children's language and reading skills in a genetically sensitive design: Are socioeconomic status and home literacy environment environmental mediators and moderators?

    PubMed

    Chow, Bonnie Wing-Yin; Ho, Connie Suk-Han; Wong, Simpson W L; Waye, Mary M Y; Zheng, Mo

    2017-12-01

    This twin study examined how family socioeconomic status (SES) and home literacy environment (HLE) contribute to Chinese language and reading skills. It included 312 Chinese twin pairs aged 3 to 11. Children were individually administered tasks of Chinese word reading, receptive vocabulary and reading-related cognitive skills, and nonverbal reasoning ability. Information on home environment was collected through parent-reported questionnaires. Results showed that SES and HLE mediated shared environmental influences but did not moderate genetic influences on general language and reading abilities. Also, SES and HLE mediated shared environmental contributions to receptive vocabulary and syllable and rhyme awareness, but not orthographic skills. The findings of this study add to past twin studies that focused on alphabetic languages, suggesting that these links could be universal across languages. They also extend existing findings on SES and HLE's contributions to reading-related cognitive skills. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  2. Bilingual First Language Acquisition: Exploring the Limits of the Language Faculty.

    ERIC Educational Resources Information Center

    Genesee, Fred

    2001-01-01

    Reviews current research in three domains of bilingual acquisition: pragmatic features of bilingual code mixing, grammatical constraints on child bilingual code mixing, and bilingual syntactic development. Examines implications from these domains for the understanding of the limits of the mental faculty to acquire language. (Author/VWL)

  3. Developing language in a developing body: the relationship between motor development and language development.

    PubMed

    Iverson, Jana M

    2010-03-01

    During the first eighteen months of life, infants acquire and refine a whole set of new motor skills that significantly change the ways in which the body moves in and interacts with the environment. In this review article, I argue that motor acquisitions provide infants with an opportunity to practice skills relevant to language acquisition before they are needed for that purpose; and that the emergence of new motor skills changes infants' experience with objects and people in ways that are relevant for both general communicative development and the acquisition of language. Implications of this perspective for current views of co-occurring language and motor impairments and for methodology in the field of child language research are also considered.

  4. Grammaticality Sensitivity in Children with Early Focal Brain Injury and Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Wulfeck, Beverly; Bates, Elizabeth; Krupa-Kwiatkowski, Magda; Saltzman, Danna

    2004-01-01

    Grammaticality judgments and processing times associated with violation detection were examined in typically developing children, children with focal brain lesions (FL) acquired early in life, and children with specific language impairment (SLI). Grammatical sensitivity in the FL group, while below that of typically developing children, was above levels…

  5. Consciousness-Raising and Prepositions

    ERIC Educational Resources Information Center

    Hendricks, Monica

    2010-01-01

    For a variety of reasons, learning English prepositions is notoriously difficult and a slow, gradual process for English as a Second Language (ESL) students. To begin, English prepositions typically are short, single-syllable or two-syllable words that are seldom stressed when speaking and therefore often not articulated clearly or heard…

  6. Language and Society. Course HP06a: Part Time BA Degree Programme.

    ERIC Educational Resources Information Center

    Griffith Univ., Brisbane (Australia). School of Humanities.

    This course, one of 16 sequential courses comprising phase one of a part-time Bachelor of Arts degree program in Australian Studies, examines a number of theoretical approaches to the study of language, particularly those which place language in a social context. It is designed for independent study combined with tutorial sessions. Chapter 1 is an…

  7. First language acquisition differs from second language acquisition in prelingually deaf signers: Evidence from sensitivity to grammaticality judgement in British Sign Language

    PubMed Central

    Cormier, Kearsy; Schembri, Adam; Vinson, David; Orfanidou, Eleni

    2012-01-01

    Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life. PMID:22578601

  8. A Randomized Controlled Trial for Children with Childhood Apraxia of Speech Comparing Rapid Syllable Transition Treatment and the Nuffield Dyspraxia Programme-Third Edition

    ERIC Educational Resources Information Center

    Murray, Elizabeth; McCabe, Patricia; Ballard, Kirrie J.

    2015-01-01

    Purpose: This randomized controlled trial compared the experimental Rapid Syllable Transition (ReST) treatment to the Nuffield Dyspraxia Programme-Third Edition (NDP3; Williams & Stephens, 2004), used widely in clinical practice in Australia and the United Kingdom. Both programs aim to improve speech motor planning/programming for children…

  9. Language planning for the 21st century: revisiting bilingual language policy for deaf children.

    PubMed

    Knoors, Harry; Marschark, Marc

    2012-01-01

    For over 25 years in some countries and more recently in others, bilingual education involving sign language and the written/spoken vernacular has been considered an essential educational intervention for deaf children. With the recent growth in universal newborn hearing screening and technological advances such as digital hearing aids and cochlear implants, however, more deaf children than ever before have the potential for acquiring spoken language. As a result, the question arises as to the role of sign language and bilingual education for deaf children, particularly those who are very young. On the basis of recent research and fully recognizing the historical sensitivity of this issue, we suggest that language planning and language policy should be revisited in an effort to ensure that they are appropriate for the increasingly diverse population of deaf children.

  10. Foreign Language Anxiety on a Massive Open Online Language Course

    ERIC Educational Resources Information Center

    Bárkányi, Zsuzsanna; Melchor-Couto, Sabela

    2017-01-01

    This paper examines learner attitudes, self-efficacy beliefs, and anxiety in a beginners' Spanish Language Massive Open Online Course (LMOOC) by answering three research questions: (1) how do learners feel about acquiring speaking skills on an LMOOC?; (2) do they experience anxiety with regards to speaking?; and (3) do their self-efficacy beliefs…

  11. Rehabilitation of discourse impairments after acquired brain injury

    PubMed Central

    Gindri, Gigiane; Pagliarin, Karina Carlesso; Casarin, Fabíola Schwengber; Branco, Laura Damiani; Ferré, Perrine; Joanette, Yves; Fonseca, Rochele Paz

    2014-01-01

    Language impairments in patients with acquired brain injury can have a negative impact on social life as well as on other cognitive domains. Discourse impairments are among the most commonly reported communication deficits among patients with acquired brain damage. Despite advances in the development of diagnostic tools for detecting such impairments, few studies have investigated interventions to rehabilitate patients presenting with these conditions. Objective: The aim of this study was to present a systematic review of the methods used in the rehabilitation of discourse following acquired brain injury. Methods: The PubMed database was searched for articles using the following keywords: "rehabilitation", "neurological injury", "communication" and "discursive abilities". Results: A total of 162 abstracts were found, but only seven of these met criteria for inclusion in the review. Four studies involved samples of individuals with aphasia whereas three studies recruited samples of individuals with traumatic brain injury. Conclusion: All but one article found that patient performance improved following participation in a discourse rehabilitation program. PMID:29213880

  12. Syllabic encoding during overt speech production in Cantonese: Evidence from temporal brain responses.

    PubMed

    Wong, Andus Wing-Kuen; Wang, Jie; Ng, Tin-Yan; Chen, Hsuan-Chih

    2016-10-01

    The time course of phonological encoding in overt Cantonese disyllabic word production was investigated using a picture-word interference task with concurrent recording of the event-related brain potentials (ERPs). Participants were asked to name aloud individually presented pictures and ignore a distracting Chinese character. Participants' naming responses were faster, relative to an unrelated control, when the distractor overlapped with the target's word-initial or word-final syllables. Furthermore, ERP waves in the syllable-related conditions were more positive-going than those in the unrelated control conditions from 500ms to 650ms post target onset (i.e., a late positivity). The mean and peak amplitudes of this late positivity correlated with the size of phonological facilitation. More importantly, the onset of the late positivity associated with word-initial syllable priming was 44ms earlier than that associated with word-final syllable priming, suggesting that phonological encoding in overt speech runs incrementally and the encoding duration for one syllable unit is approximately 44ms. Although the size of effective phonological units might vary across languages, as suggested by previous speech production studies, the present data indicate that the incremental nature of phonological encoding is a universal mechanism. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Language, Space, Time: Anthropological Tools and Scientific Exploration on Mars

    NASA Technical Reports Server (NTRS)

    Wales, Roxana

    2005-01-01

    This viewgraph presentation reviews the importance of social science disciplines in the scientific exploration of Mars, with attention to differences in language, workspace, and time. The social scientist's perspective appears to have been useful in developing a completely new workspace, keeping track of new vocabulary, and working across different time zones (i.e., terrestrial and Martian).

  14. Phonotactic Diversity Predicts the Time Depth of the World’s Language Families

    PubMed Central

    Rama, Taraka

    2013-01-01

    The ASJP (Automated Similarity Judgment Program) described an automated, lexical similarity-based method for dating the world’s language groups using 52 archaeological, epigraphic and historical calibration date points. The present paper describes a new automated dating method, based on phonotactic diversity. Unlike ASJP, our method does not require any information on the internal classification of a language group. Also, the method can use all the available word lists for a language and its dialects, eschewing the debate on ‘language’ vs. ‘dialect’. We further combine these dates and provide a new baseline which, to our knowledge, is the best one. We make a systematic comparison of our method, ASJP’s dating procedure, and combined dates. We predict time depths for the world’s language families and sub-families using this new baseline. Finally, we explain our results in the model of language change given by Nettle. PMID:23691003

  15. Review Article: Recent Publications on Research Methods in Second Language Acquisition

    ERIC Educational Resources Information Center

    Ionin, Tania

    2013-01-01

    The central goal of the field of second language acquisition (SLA) is to describe and explain how second language learners acquire the target language. In order to achieve this goal, SLA researchers work with second language data, which can take a variety of forms, including (but not limited to) such commonly used methods as naturalistic…

  16. The perception of intonation questions and statements in Cantonese.

    PubMed

    Ma, Joan K-Y; Ciocca, Valter; Whitehill, Tara L

    2011-02-01

    In tone languages there are potential conflicts in the perception of lexical tone and intonation, as both depend mainly on the differences in fundamental frequency (F0) patterns. The present study investigated the acoustic cues associated with the perception of sentences as questions or statements in Cantonese, as a function of the lexical tone in sentence-final position. Cantonese listeners performed intonation identification tasks involving complete sentences, isolated final syllables, and sentences without the final syllable (carriers). Sensitivity (d' scores) was similar for complete sentences and final syllables but was significantly lower for carriers. Sensitivity was also affected by tone identity. These findings show that the perception of questions and statements relies primarily on the F0 characteristics of the final syllables (local F0 cues). A measure of response bias (c) provided evidence for a general bias toward the perception of statements. Logistic regression analyses showed that utterances were accurately classified as questions or statements by using average F0 and F0 interval. Average F0 of carriers (global F0 cue) was also found to be a reliable secondary cue. These findings suggest that the use of F0 cues for the perception of intonation questions in tonal languages is likely to be language-specific.
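
    The sensitivity (d') and response-bias (c) measures in this abstract come from standard signal detection theory, computed from hit and false-alarm rates. As a minimal sketch (the rates below are illustrative values, not the study's data), treating "question" as the signal category:

    ```python
    from statistics import NormalDist

    z = NormalDist().inv_cdf  # standard-normal quantile function

    def dprime_and_bias(hit_rate: float, fa_rate: float) -> tuple[float, float]:
        """Signal-detection sensitivity d' and response bias c.

        hit_rate is P(respond "question" | question); fa_rate is
        P(respond "question" | statement).
        """
        d_prime = z(hit_rate) - z(fa_rate)
        c = -(z(hit_rate) + z(fa_rate)) / 2  # c > 0: bias toward "statement"
        return d_prime, c

    # Illustrative contrast: strong discrimination (as for complete sentences
    # and final syllables) vs. weak discrimination (as for carriers).
    print(dprime_and_bias(0.90, 0.10))  # higher d'
    print(dprime_and_bias(0.60, 0.40))  # lower d'
    ```

    A positive c under this convention would correspond to the statement bias the study reports.
    
    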

  17. Real-time functional mapping: potential tool for improving language outcome in pediatric epilepsy surgery

    PubMed Central

    Korostenskaja, Milena; Chen, Po-Ching; Salinas, Christine M.; Westerveld, Michael; Brunner, Peter; Schalk, Gerwin; Cook, Jane C.; Baumgartner, James; Lee, Ki H.

    2015-01-01

    Accurate language localization expands surgical treatment options for epilepsy patients and reduces the risk of postsurgery language deficits. Electrical cortical stimulation mapping (ESM) is considered to be the clinical gold standard for language localization. While ESM affords clinically valuable results, it can be poorly tolerated by children, requires active participation and compliance, carries a risk of inducing seizures, is highly time consuming, and is labor intensive. Given these limitations, alternative and/or complementary functional localization methods such as analysis of electrocorticographic (ECoG) activity in high gamma frequency band in real time are needed to precisely identify eloquent cortex in children. In this case report, the authors examined 1) the use of real-time functional mapping (RTFM) for language localization in a high gamma frequency band derived from ECoG to guide surgery in an epileptic pediatric patient and 2) the relationship of RTFM mapping results to postsurgical language outcomes. The authors found that RTFM demonstrated relatively high sensitivity (75%) and high specificity (90%) when compared with ESM in a “next-neighbor” analysis. While overlapping with ESM in the superior temporal region, RTFM showed a few other areas of activation related to expressive language function, areas that were eventually resected during the surgery. The authors speculate that this resection may be associated with observed postsurgical expressive language deficits. With additional validation in more subjects, this finding would suggest that surgical planning and associated assessment of the risk/benefit ratio would benefit from information provided by RTFM mapping. PMID:24995815

  18. Are First- and Second-Language Factors Related in Predicting Second-Language Reading Comprehension? A Study of Spanish-Speaking Children Acquiring English as a Second Language from First to Second Grade

    ERIC Educational Resources Information Center

    Gottardo, Alexandra; Mueller, Julie

    2009-01-01

    First-language (L1) and second-language (L2) oral language skills and L2 word reading were used as predictors to test the simple view of reading as a model of L2 reading comprehension. The simple view of reading states that reading comprehension is related to decoding and oral language comprehension skills. One hundred thirty-one…

  19. Universal Grammar, Crosslinguistic Variation and Second Language Acquisition

    ERIC Educational Resources Information Center

    White, Lydia

    2012-01-01

    According to generative linguistic theory, certain principles underlying language structure are innately given, accounting for how children are able to acquire their mother tongues (L1s) despite a mismatch between the linguistic input and the complex unconscious mental representation of language that children achieve. This innate structure is…

  20. Acquisition of Malay word recognition skills: lessons from low-progress early readers.

    PubMed

    Lee, Lay Wah; Wheldall, Kevin

    2011-02-01

    Malay is a consistent alphabetic orthography with complex syllable structures. The focus of this research was to investigate word recognition performance in order to inform reading interventions for low-progress early readers. Forty-six Grade 1 students were sampled and 11 were identified as low-progress readers. The results indicated that both syllable awareness and phoneme blending were significant predictors of word recognition, suggesting that both syllable and phonemic grain-sizes are important in Malay word recognition. Item analysis revealed a hierarchical pattern of difficulty based on the syllable and the phonic structure of the words. Error analysis identified the sources of errors as inefficient syllable segmentation, oversimplification of syllables, insufficient grapheme-phoneme knowledge and inefficient phonemic code assembly. Evidence also suggests that direct instruction in syllable segmentation, phonemic awareness and grapheme-phoneme correspondence is necessary for low-progress readers to acquire word recognition skills. Finally, a logical sequence to teach grapheme-phoneme decoding in Malay is suggested. Copyright © 2010 John Wiley & Sons, Ltd.

  1. Spatial language facilitates spatial cognition: Evidence from children who lack language input

    PubMed Central

    Gentner, Dedre; Özyürek, Asli; Gürcanli, Özge; Goldin-Meadow, Susan

    2013-01-01

    Does spatial language influence how people think about space? To address this question, we observed children who did not know a conventional language, and tested their performance on nonlinguistic spatial tasks. We studied deaf children living in Istanbul whose hearing losses prevented them from acquiring speech and whose hearing parents had not exposed them to sign. Lacking a conventional language, the children used gestures, called homesigns, to communicate. In Study 1, we asked whether homesigners used gesture to convey spatial relations, and found that they did not. In Study 2, we tested a new group of homesigners on a spatial mapping task, and found that they performed significantly worse than hearing Turkish children who were matched to the deaf children on another cognitive task. The absence of spatial language thus went hand-in-hand with poor performance on the nonlinguistic spatial task, pointing to the importance of spatial language in thinking about space. PMID:23542409

  2. Sleep underpins the plasticity of language production.

    PubMed

    Gaskell, M Gareth; Warker, Jill; Lindsay, Shane; Frost, Rebecca; Guest, James; Snowdon, Reza; Stackhouse, Abigail

    2014-07-01

    The constraints that govern acceptable phoneme combinations in speech perception and production have considerable plasticity. We addressed whether sleep influences the acquisition of new constraints and their integration into the speech-production system. Participants repeated sequences of syllables in which two phonemes were artificially restricted to syllable onset or syllable coda, depending on the vowel in that sequence. After 48 sequences, participants either had a 90-min nap or remained awake. Participants then repeated 96 sequences so implicit constraint learning could be examined, and then were tested for constraint generalization in a forced-choice task. The sleep group, but not the wake group, produced speech errors at test that were consistent with restrictions on the placement of phonemes in training. Furthermore, only the sleep group generalized their learning to new materials. Polysomnography data showed that implicit constraint learning was associated with slow-wave sleep. These results show that sleep facilitates the integration of new linguistic knowledge with existing production constraints. These data have relevance for systems-consolidation models of sleep. © The Author(s) 2014.

  3. Developing language in a developing body: the relationship between motor development and language development*

    PubMed Central

    Iverson, Jana M.

    2010-01-01

    During the first eighteen months of life, infants acquire and refine a whole set of new motor skills that significantly change the ways in which the body moves in and interacts with the environment. In this review article, I argue that motor acquisitions provide infants with an opportunity to practice skills relevant to language acquisition before they are needed for that purpose; and that the emergence of new motor skills changes infants’ experience with objects and people in ways that are relevant for both general communicative development and the acquisition of language. Implications of this perspective for current views of co-occurring language and motor impairments and for methodology in the field of child language research are also considered. PMID:20096145

  4. Spoken Persuasive Discourse Abilities of Adolescents with Acquired Brain Injury

    ERIC Educational Resources Information Center

    Moran, Catherine; Kirk, Cecilia; Powell, Emma

    2012-01-01

    Purpose: The aim of this study was to examine the performance of adolescents with acquired brain injury (ABI) during a spoken persuasive discourse task. Persuasive discourse is frequently used in social and academic settings and is of importance in the study of adolescent language. Method: Participants included 8 adolescents with ABI and 8 peers…

  5. What Languages to Include in Curriculum for Muslim Children

    ERIC Educational Resources Information Center

    Musharraf, Muhammad Nabeel

    2015-01-01

    Languages are tools that connect people globally and help them acquire knowledge. It is a highly critical decision to choose a language or a set of languages for inclusion in curriculum in a manner that would be most productive at personal, community and national level. What we need to see in our next generation has to be "sowed the seeds…

  6. Learning bias, cultural evolution of language, and the biological evolution of the language faculty.

    PubMed

    Smith, Kenny

    2011-04-01

    The biases of individual language learners act to determine the learnability and cultural stability of languages: learners come to the language learning task with biases which make certain linguistic systems easier to acquire than others. These biases are repeatedly applied during the process of language transmission, and consequently should affect the types of languages we see in human populations. Understanding the cultural evolutionary consequences of particular learning biases is therefore central to understanding the link between language learning in individuals and language universals, common structural properties shared by all the world’s languages. This paper reviews a range of models and experimental studies which show that weak biases in individual learners can have strong effects on the structure of socially learned systems such as language, suggesting that strong universal tendencies in language structure do not require us to postulate strong underlying biases or constraints on language learning. Furthermore, understanding the relationship between learner biases and language design has implications for theories of the evolution of those learning biases: models of gene-culture coevolution suggest that, in situations where a cultural dynamic mediates between properties of individual learners and properties of language in this way, biological evolution is unlikely to lead to the emergence of strong constraints on learning.

  7. Comparisons of lesion detectability in ultrasound images acquired using time-shift compensation and spatial compounding.

    PubMed

    Lacefield, James C; Pilkington, Wayne C; Waag, Robert C

    2004-12-01

    The effects of aberration, time-shift compensation, and spatial compounding on the discrimination of positive-contrast lesions in ultrasound B-scan images are investigated using a two-dimensional (2-D) array system and tissue-mimicking phantoms. Images were acquired within an 8.8 × 12-mm² field of view centered on one of four statistically similar 4-mm diameter spherical lesions. Each lesion was imaged in four planes offset by successive 45-degree rotations about the central scan line. Images of the lesions were acquired using conventional geometric focusing through a water path, geometric focusing through a 35-mm thick distributed aberration phantom, and time-shift compensated transmit and receive focusing through the aberration phantom. The views of each lesion were averaged to form sets of water path, aberrated, and time-shift compensated 4:1 compound images and 16:1 compound images. The contrast ratio and detectability index of each image were computed to assess lesion differentiation. In the presence of aberration representative of breast or abdominal wall tissue, time-shift compensation provided statistically significant improvements of contrast ratio but did not consistently affect the detectability index, and spatial compounding significantly increased the detectability index but did not alter the contrast ratio. Time-shift compensation and spatial compounding thus provide complementary benefits to lesion detection.
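
    The contrast ratio and detectability index mentioned in this abstract can be sketched with one common pair of definitions (an assumption here; the study's exact formulas are not given in the abstract): contrast ratio as the mean lesion-to-background intensity ratio, and detectability as a lesion-SNR-style index. The pixel data below are synthetic.

    ```python
    import numpy as np

    def lesion_metrics(lesion_px: np.ndarray, background_px: np.ndarray):
        """Contrast ratio and a lesion-SNR-style detectability index.

        Common B-scan definitions assumed for illustration only.
        """
        mu_l, mu_b = lesion_px.mean(), background_px.mean()
        contrast_ratio = mu_l / mu_b
        # Mean-intensity difference normalized by pooled speckle variability.
        detectability = (mu_l - mu_b) / np.sqrt(
            (lesion_px.var() + background_px.var()) / 2.0
        )
        return contrast_ratio, detectability

    rng = np.random.default_rng(0)
    lesion = rng.normal(1.4, 0.3, 2000)      # brighter, positive-contrast lesion
    background = rng.normal(1.0, 0.3, 2000)  # surrounding speckle
    cr, d = lesion_metrics(lesion, background)
    print(round(cr, 2), round(d, 2))
    ```

    Under these definitions, the study's finding reads as: time-shift compensation mainly raised the ratio of means, while compounding mainly shrank the variance terms in the denominator.
    
    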

  8. Typhoid fever acquired in the United States, 1999-2010: epidemiology, microbiology, and use of a space-time scan statistic for outbreak detection.

    PubMed

    Imanishi, M; Newton, A E; Vieira, A R; Gonzalez-Aviles, G; Kendall Scott, M E; Manikonda, K; Maxwell, T N; Halpin, J L; Freeman, M M; Medalla, F; Ayers, T L; Derado, G; Mahon, B E; Mintz, E D

    2015-08-01

    Although rare, typhoid fever cases acquired in the United States continue to be reported. Detection and investigation of outbreaks in these domestically acquired cases offer opportunities to identify chronic carriers. We searched surveillance and laboratory databases for domestically acquired typhoid fever cases, used a space-time scan statistic to identify clusters, and classified clusters as outbreaks or non-outbreaks. From 1999 to 2010, domestically acquired cases accounted for 18% of 3373 reported typhoid fever cases; their isolates were less often multidrug-resistant (2% vs. 15%) compared to isolates from travel-associated cases. We identified 28 outbreaks and two possible outbreaks within 45 space-time clusters of ⩾2 domestically acquired cases, including three outbreaks involving ⩾2 molecular subtypes. The approach detected seven of the ten outbreaks published in the literature or reported to CDC. Although this approach did not definitively identify any previously unrecognized outbreaks, it showed the potential to detect outbreaks of typhoid fever that may escape detection by routine analysis of surveillance data. Sixteen outbreaks had been linked to a carrier. Every case of typhoid fever acquired in a non-endemic country warrants thorough investigation. Space-time scan statistics, together with shoe-leather epidemiology and molecular subtyping, may improve outbreak detection.

  9. First language acquisition differs from second language acquisition in prelingually deaf signers: evidence from sensitivity to grammaticality judgement in British Sign Language.

    PubMed

    Cormier, Kearsy; Schembri, Adam; Vinson, David; Orfanidou, Eleni

    2012-07-01

    Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. Language-specific memory for everyday arithmetic facts in Chinese-English bilinguals.

    PubMed

    Chen, Yalin; Yanke, Jill; Campbell, Jamie I D

    2016-04-01

    The role of language in memory for arithmetic facts remains controversial. Here, we examined transfer of memory training for evidence that bilinguals may acquire language-specific memory stores for everyday arithmetic facts. Chinese-English bilingual adults (n = 32) were trained on different subsets of simple addition and multiplication problems. Each operation was trained in one language or the other. The subsequent test phase included all problems with addition and multiplication alternating across trials in two blocks, one in each language. Averaging over training language, the response time (RT) gains for trained problems relative to untrained problems were greater in the trained language than in the untrained language. Subsequent analysis showed that English training produced larger RT gains for trained problems relative to untrained problems in English at test relative to the untrained Chinese language. In contrast, there was no evidence with Chinese training that problem-specific RT gains differed between Chinese and the untrained English language. We propose that training in Chinese promoted a translation strategy for English arithmetic (particularly multiplication) that produced strong cross-language generalization of practice, whereas training in English strengthened relatively weak, English-language arithmetic memories and produced little generalization to Chinese (i.e., English training did not induce an English translation strategy for Chinese language trials). The results support the existence of language-specific strengthening of memory for everyday arithmetic facts.

  11. An Advantage for Perceptual Edges in Young Infants' Memory for Speech

    ERIC Educational Resources Information Center

    Hochmann, Jean-Rémy; Langus, Alan; Mehler, Jacques

    2016-01-01

    Models of language acquisition are constrained by the information that learners can extract from their input. Experiment 1 investigated whether 3-month-old infants are able to encode a repeated, unsegmented sequence of five syllables. Event-related-potentials showed that infants reacted to a change of the initial or the final syllable, but not to…

  12. Communication Strategies in English as a Second Language (ESL) Context

    ERIC Educational Resources Information Center

    Putri, Lidya Ayuni

    2013-01-01

    Communication is important for people around the world. People try to communicate with one another across the globe using language. Given the differences among the world's languages, people need to learn the language of those they wish to communicate with; for example, Indonesian people learn to acquire English. In the…

  13. Language and Literacy in Bilingual Children. Child Language and Child Development.

    ERIC Educational Resources Information Center

    Oller, D. Kimbrough, Ed.; Eilers, Rebecca E., Ed.

    This collection of papers reports research on the effects of bilingual learning on the ability to speak two languages and the ability to acquire full literacy in both. There are 12 chapters in 4 parts. Part 1, "Background," includes (1) "Assessing the Effects of Bilingualism: A Background" (D. Kimbrough Oller and Barbara Zurer…

  14. Statistical learning and language acquisition

    PubMed Central

    Romberg, Alexa R.; Saffran, Jenny R.

    2011-01-01

    Human learners, including infants, are highly sensitive to structure in their environment. Statistical learning refers to the process of extracting this structure. A major question in language acquisition in the past few decades has been the extent to which infants use statistical learning mechanisms to acquire their native language. There have been many demonstrations showing infants’ ability to extract structures in linguistic input, such as the transitional probability between adjacent elements. This paper reviews current research on how statistical learning contributes to language acquisition. Current research is extending the initial findings of infants’ sensitivity to basic statistical information in many different directions, including investigating how infants represent regularities, learn about different levels of language, and integrate information across situations. These current directions emphasize studying statistical language learning in context: within language, within the infant learner, and within the environment as a whole. PMID:21666883
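
    The transitional probabilities mentioned above are straightforward to compute. As a minimal illustration, using a toy syllable stream built from made-up trisyllabic "words" (not the stimuli of any cited study):

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probabilities over a syllable sequence:
    TP(x -> y) = count(x followed by y) / count(x)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

# Familiarization-style stream: three "words" concatenated in random
# order with no pauses, so only the statistics mark word boundaries.
random.seed(0)
words = ["bi da ku", "pa do ti", "go la bu"]
stream = []
for _ in range(300):
    stream.extend(random.choice(words).split())

tps = transitional_probabilities(stream)
# Within-word transitions (e.g. bi -> da) have TP = 1.0 by construction;
# transitions that straddle a word boundary (e.g. ku -> pa) hover near 1/3.
```

    The dip in transitional probability at word boundaries is exactly the cue infants are hypothesized to exploit for segmentation.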

  15. Electrophysiological Investigations of Second Language Word Learning, Attrition and Bilingual Processing

    ERIC Educational Resources Information Center

    Pitkanen, Ilona

    2010-01-01

    The research presented in this dissertation examined changes in brain activity associated with learning, forgetting and using a second language. The first experiment investigated the changes that occur when novice adult second language learners acquire and forget second language words. Event-related brain potentials were measured while native…

  16. Diminutives facilitate word segmentation in natural speech: cross-linguistic evidence.

    PubMed

    Kempe, Vera; Brooks, Patricia J; Gillis, Steven; Samson, Graham

    2007-06-01

    Final-syllable invariance is characteristic of diminutives (e.g., doggie), which are a pervasive feature of the child-directed speech registers of many languages. Invariance in word endings has been shown to facilitate word segmentation (Kempe, Brooks, & Gillis, 2005) in an incidental-learning paradigm in which synthesized Dutch pseudonouns were used. To broaden the cross-linguistic evidence for this invariance effect and to increase its ecological validity, adult English speakers (n=276) were exposed to naturally spoken Dutch or Russian pseudonouns presented in sentence contexts. A forced choice test was given to assess target recognition, with foils comprising unfamiliar syllable combinations in Experiments 1 and 2 and syllable combinations straddling word boundaries in Experiment 3. A control group (n=210) received the recognition test with no prior exposure to targets. Recognition performance improved with increasing final-syllable rhyme invariance, with larger increases for the experimental group. This confirms that word ending invariance is a valid segmentation cue in artificial, as well as naturalistic, speech and that diminutives may aid segmentation in a number of languages.

  17. Validation of the Acoustic Voice Quality Index Version 03.01 and the Acoustic Breathiness Index in the Spanish language.

    PubMed

    Delgado Hernández, Jonathan; León Gómez, Nieves M; Jiménez, Alejandra; Izquierdo, Laura M; Barsties V Latoszek, Ben

    2018-05-01

    The aim of this study was to validate the Acoustic Voice Quality Index 03.01 (AVQIv3) and the Acoustic Breathiness Index (ABI) in the Spanish language. Concatenated voice samples of continuous speech (cs) and sustained vowel (sv) from 136 subjects with dysphonia and 47 vocally healthy subjects were perceptually judged for overall voice quality and breathiness severity. First, to reach a higher level of ecological validity, the proportions of cs and sv were equalized with respect to time length: a 3-second sv part and a correspondingly long voiced cs part. Second, concurrent validity and diagnostic accuracy were verified. Ratings from five experts, which showed moderate reliability for overall voice quality and breathiness severity, served as the perceptual reference. It was found that standardizing the cs part at 33 syllables, which represents 3 seconds of voiced cs, equalizes the two speech tasks. A strong correlation was revealed between AVQIv3 and overall voice quality, and between ABI and perceived breathiness severity. Additionally, the best diagnostic outcome was identified at a threshold of 2.28 for AVQIv3 and 3.40 for ABI. In the Spanish language, AVQIv3 and ABI thus yielded valid and robust quantification of abnormal voice quality with respect to overall voice quality and breathiness severity.
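
    The "best diagnostic outcome at a threshold" refers to choosing a cutoff score that balances sensitivity and specificity. A minimal sketch of one common way to do this (maximizing Youden's J); the scores below are made up for illustration and are not the study's data:

```python
def best_threshold(pathological, healthy):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1.
    Higher scores are assumed to indicate more severe dysphonia."""
    best = None
    for t in sorted(set(pathological) | set(healthy)):
        sens = sum(s >= t for s in pathological) / len(pathological)
        spec = sum(s < t for s in healthy) / len(healthy)
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (t, j, sens, spec)
    return best

# Illustrative acoustic-index scores only.
dysphonic = [2.4, 2.9, 3.3, 3.8, 4.1]
controls = [0.8, 1.2, 1.6, 1.9, 2.1]
t, j, sens, spec = best_threshold(dysphonic, controls)
# With fully separated groups, the lowest pathological score (2.4)
# gives sensitivity = specificity = 1.0.
```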

  18. Approaching sign language test construction: adaptation of the German sign language receptive skills test.

    PubMed

    Haug, Tobias

    2011-01-01

    There is a current need for reliable and valid test instruments in different countries in order to monitor deaf children's sign language acquisition. However, very few tests are commercially available that offer strong evidence for their psychometric properties. A German Sign Language (DGS) test focusing on linguistic structures that are acquired in preschool- and school-aged children (4-8 years old) is urgently needed. Using the British Sign Language Receptive Skills Test, that has been standardized and has sound psychometric properties, as a template for adaptation thus provides a starting point for tests of a sign language that is less documented, such as DGS. This article makes a novel contribution to the field by examining linguistic, cultural, and methodological issues in the process of adapting a test from the source language to the target language. The adapted DGS test has sound psychometric properties and provides the basis for revision prior to standardization. © The Author 2011. Published by Oxford University Press. All rights reserved.

  19. Precursors to Natural Grammar Learning: Preliminary Evidence from 4-Month-Old Infants

    PubMed Central

    Friederici, Angela D.; Mueller, Jutta L.; Oberecker, Regine

    2011-01-01

    When learning a new language, grammar, although difficult, is very important, as grammatical rules determine the relations between the words in a sentence. There is evidence that very young infants can detect rules determining the relation between neighbouring syllables in short syllable sequences. A critical feature of all natural languages, however, is that many grammatical rules concern the dependency relation between non-neighbouring words or elements in a sentence, e.g. between an auxiliary and a verb inflection, as in "is singing". Thus, the issue of when and how children begin to recognize such non-adjacent dependencies is fundamental to our understanding of language acquisition. Here, we use brain potential measures to demonstrate that the ability to recognize dependencies between non-adjacent elements in a novel natural language is observable by the age of 4 months. Brain responses indicate that 4-month-old German infants discriminate between grammatical and ungrammatical dependencies in auditorily presented Italian sentences after only brief exposure to correct sentences of the same type. As the grammatical dependencies are realized by phonologically distinct syllables, the present data most likely reflect phonologically based implicit learning mechanisms which can serve as a precursor to later grammar learning. PMID:21445341

  20. The Development of Early Childhood Teachers' Language Knowledge in Different Educational Tracks

    ERIC Educational Resources Information Center

    Strohmer, Janina; Mischo, Christoph

    2015-01-01

    Early childhood teachers should have extensive knowledge about language and language development, because these facets of professional knowledge are considered as important requirements for fostering language development in early childhood education settings. It is assumed that early childhood teachers acquire this knowledge during pre-service…

  1. Acquiring the optimal time for hyperbaric therapy in the rat model of CFA induced arthritis.

    PubMed

    Koo, Sung Tae; Lee, Chang-Hyung; Shin, Yong Il; Ko, Hyun Yoon; Lee, Da Gyo; Jeong, Han-Sol

    2014-01-01

    We previously published an article about the pressure effect using a rheumatoid animal model. Hyperbaric therapy appears to be beneficial in treating rheumatoid arthritis (RA) by reducing the inflammatory process in an animal model. In this sense, acquiring the optimal pressure-treatment time parameter for RA is important, and no optimal hyperbaric therapy time has been suggested up to now. The purpose of our study was to determine the optimal time for hyperbaric therapy in the RA rat model. Controlled animal study. Following injection of complete Freund's adjuvant (CFA) into one side of the knee joint, 32 rats were randomly assigned to 3 different time groups (1, 3, 5 hours a day) in a hyperbaric chamber at 1.5 atmospheres absolute (ATA) for 12 days. Pain levels were assessed daily for 2 weeks by the weight bearing force (WBF) of the affected limb. In addition, the expression levels of the gelatinases MMP-2 and MMP-9 in the synovial fluids of the knees were analyzed. WBF was markedly reduced 2 days after injection and then recovered spontaneously up to 14 days in all 3 groups. There were significant differences in WBF between 5 hours and control during the third through twelfth days, between 3 hours and control during the third through fifth and tenth through twelfth days, and between 3 hours and 5 hours during the third through seventh days (P < 0.05). The MMP-9/MMP-2 ratio increased at 14 days after the CFA injection in all groups compared to the initial findings; however, the 3-hour group showed a smaller MMP-9/MMP-2 ratio than the control group. Although the sample was large enough to support our hypothesis, a larger sample will be needed to strengthen validity and reliability. The effect of hyperbaric treatment appears to depend on the therapy time per day under 1.5 ATA pressure for a short period; however, the long-term effects were similar in all groups. Further study will be needed to acquire the optimal pressure

  2. The proper treatment of language acquisition and change in a population setting.

    PubMed

    Niyogi, Partha; Berwick, Robert C

    2009-06-23

    Language acquisition maps linguistic experience, primary linguistic data (PLD), onto linguistic knowledge, a grammar. Classically, computational models of language acquisition assume a single target grammar and one PLD source, the central question being whether the target grammar can be acquired from the PLD. However, real-world learners confront populations with variation, i.e., multiple target grammars and PLDs. Removing this idealization has inspired a new class of population-based language acquisition models. This paper contrasts 2 such models. In the first, iterated learning (IL), each learner receives PLD from one target grammar but different learners can have different targets. In the second, social learning (SL), each learner receives PLD from possibly multiple targets, e.g., from 2 parents. We demonstrate that these 2 models have radically different evolutionary consequences. The IL model is dynamically deficient in 2 key respects. First, the IL model admits only linear dynamics and so cannot describe phase transitions, attested rapid changes in languages over time. Second, the IL model cannot properly describe the stability of languages over time. In contrast, the SL model leads to nonlinear dynamics, bifurcations, and possibly multiple equilibria and so suffices to model both the case of stable language populations, mixtures of more than 1 language, as well as rapid language change. The 2 models also make distinct, empirically testable predictions about language change. Using historical data, we show that the SL model more faithfully replicates the dynamics of the evolution of Middle English.

  3. Gains to Language Learners from Viewing Target Language Closed-Captioned Films

    ERIC Educational Resources Information Center

    Stewart, Melissa A.; Pertusa, Inmaculada

    2004-01-01

    In an effort to facilitate students' understanding of films in the target language, many instructors turn to films with English subtitles. Viewing films subtitled in English does not encourage learners to use their previously acquired listening skills, but rather allows them to rely on reading English instead of making the extra effort required to…

  4. Nonvocal language acquisition in adolescents with severe physical disabilities: Bliss symbol versus iconic stimulus formats.

    PubMed Central

    Hurlbut, B I; Iwata, B A; Green, J D

    1982-01-01

    This study compared training in two language systems for three severely handicapped, nonvocal adolescents: the Bliss symbol system and an iconic picture system. Following baseline, training and review trials were implemented using an alternating treatments design. Daily probes were conducted to assess maintenance, stimulus generalization, and response generalization, and data were collected on spontaneous usage of either language system throughout the school day. Results showed that students required approximately four times as many trials to acquire Bliss symbols as iconic pictures, and that students maintained a higher percentage of iconic pictures. Stimulus generalization occurred in both language systems, while the number of correct responses during response generalization probes was much greater for the iconic system. Finally, students almost always showed more iconic responses than Bliss responses in daily spontaneous usage. These results suggest that an iconic system might be more readily acquired, maintained, and generalized to daily situations. Implications of these findings for the newly verbal person are discussed. PMID:6181049

  5. Verbal Positional Memory in 7-Month-Olds

    ERIC Educational Resources Information Center

    Benavides-Varela, Silvia; Mehler, Jacques

    2015-01-01

    Verbal memory is a fundamental prerequisite for language learning. This study investigated 7-month-olds' (N = 62) ability to remember the identity and order of elements in a multisyllabic word. The results indicate that infants detect changes in the order of edge syllables, or the identity of the middle syllables, but fail to encode the order…

  6. A Review of Integrating Mobile Phones for Language Learning

    ERIC Educational Resources Information Center

    Darmi, Ramiza; Albion, Peter

    2014-01-01

    Mobile learning (m-learning) is gradually being introduced in language classrooms. All forms of mobile technology offer portability along with increasingly smart features. Studies have shown that technology can play a beneficial role in language learning. Various features of the technology have been exploited and researched for acquiring and learning…

  7. The contribution of children's time-specific and longitudinal expressive language skills on developmental trajectories of executive function.

    PubMed

    Kuhn, Laura J; Willoughby, Michael T; Vernon-Feagans, Lynne; Blair, Clancy B

    2016-08-01

    To investigate whether children's early language skills support the development of executive functions (EFs), the current study used an epidemiological sample (N=1121) to determine whether two key language indicators, vocabulary and language complexity, were predictive of EF abilities over the preschool years. We examined vocabulary and language complexity both as time-varying covariates that predicted time-specific indicators of EF at 36 and 60 months of age and as time-invariant covariates that predicted children's EF at 60 months and change in EF from 36 to 60 months. We found that the rate of change in children's vocabulary between 15 and 36 months was associated with both the trajectory of EF from 36 to 60 months and the resulting abilities at 60 months. In contrast, children's language complexity had a time-specific association with EF only at 60 months. These findings suggest that children's early gains in vocabulary may be particularly relevant for emerging EF abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Using Short Texts to Teach English as Second Language: An Integrated Approach

    ERIC Educational Resources Information Center

    Kembo, Jane

    2016-01-01

    The teacher of English Language is often hard pressed to find interesting and authentic ways of presenting language to second language speakers. While language can be taught and learned, part of it must be acquired, and short texts provide powerful tools for doing so and for reinforcing what has been taught or learned. This paper starts from research,…

  9. Genetic biasing through cultural transmission: do simple Bayesian models of language evolution generalize?

    PubMed

    Dediu, Dan

    2009-08-07

    The recent Bayesian approaches to language evolution and change seem to suggest that genetic biases can impact on the characteristics of language, but, at the same time, that its cultural transmission can partially free it from these same genetic constraints. One of the current debates centres on the striking differences between sampling and a posteriori maximising Bayesian learners, with the first converging on the prior bias while the latter allows a certain freedom to language evolution. The present paper shows that this difference disappears if populations more complex than a single teacher and a single learner are considered, with the resulting behaviours more similar to the sampler. This suggests that generalisations based on the language produced by Bayesian agents in such homogeneous single agent chains are not warranted. It is not clear which of the assumptions in such models are responsible, but these findings seem to support the rising concerns on the validity of the "acquisitionist" assumption, whereby the locus of language change and evolution is taken to be the first language acquirers (children) as opposed to the competent language users (the adults).
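
    The sampler/maximiser contrast can be made concrete with a toy chain of Bayesian agents. The sketch below assumes a deliberately simplified setting (two "languages", one noisy datum per generation, a single teacher-learner chain) that is not the paper's actual model; it only illustrates the standard result that a sampling learner's chain converges to the prior while a maximising (MAP) learner's chain can drift away from it:

```python
import random

def simulate_chain(alpha, eps, steps, rule, seed=0):
    """Iterated learning over two 'languages' h in {0, 1}.
    Each generation the teacher emits one noisy datum b (b == h with
    probability 1 - eps); the learner forms the posterior P(h=1 | b)
    and either samples from it ('sample') or takes its mode ('map').
    Returns the long-run frequency of language 1."""
    rng = random.Random(seed)
    h, ones = 1, 0
    for _ in range(steps):
        b = h if rng.random() < 1 - eps else 1 - h
        like1 = (1 - eps) if b == 1 else eps
        like0 = eps if b == 1 else (1 - eps)
        post1 = like1 * alpha / (like1 * alpha + like0 * (1 - alpha))
        if rule == "sample":
            h = 1 if rng.random() < post1 else 0
        else:  # "map": deterministic a posteriori maximisation
            h = 1 if post1 > 0.5 else 0
        ones += h
    return ones / steps

alpha, eps = 0.3, 0.1  # prior P(h=1) and transmission noise (assumed values)
f_sample = simulate_chain(alpha, eps, 200_000, "sample")
f_map = simulate_chain(alpha, eps, 200_000, "map")
# The sampler chain spends ~30% of generations in language 1 (the prior);
# the maximiser here just copies the noisy datum, drifting to ~50%.
```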

  10. Language acquisition from a biolinguistic perspective.

    PubMed

    Crain, Stephen; Koring, Loes; Thornton, Rosalind

    2017-10-01

    This paper describes the biolinguistic approach to language acquisition. We contrast the biolinguistic approach with a usage-based approach. We argue that the biolinguistic approach is superior because it provides more accurate and more extensive generalizations about the properties of human languages, as well as a better account of how children acquire human languages. To distinguish between these accounts, we focus on how child and adult language differ both in sentence production and in sentence understanding. We argue that the observed differences resist explanation using the cognitive mechanisms that are invoked by the usage-based approach. In contrast, the biolinguistic approach explains the qualitative parametric differences between child and adult language. Explaining how child and adult language differ and demonstrating that children perceive unity despite apparent diversity are two of the hallmarks of the biolinguistic approach to language acquisition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Learner Agency and Its Effect on Spoken Interaction Time in the Target Language

    ERIC Educational Resources Information Center

    Knight, Janine; Barberà, Elena

    2017-01-01

    This paper presents the results of how four dyads in an online task-based synchronous computer-mediated (TB-SCMC) interaction event use their agency to carry out speaking tasks, and how their choices and actions affect time spent interacting in the target language. A case study approach was employed to analyse the language functions and cognitive…

  12. Acquiring Word Class Distinctions in American Sign Language: Evidence from Handshape

    ERIC Educational Resources Information Center

    Brentari, Diane; Coppola, Marie; Jung, Ashley; Goldin-Meadow, Susan

    2013-01-01

    Handshape works differently in nouns versus a class of verbs in American Sign Language (ASL) and thus can serve as a cue to distinguish between these two word classes. Handshapes representing characteristics of the object itself ("object" handshapes) and handshapes representing how the object is handled ("handling" handshapes)…

  13. The Role of the Secondary Stress in Teaching the English Rhythm

    ERIC Educational Resources Information Center

    Yurtbasi, Metin

    2017-01-01

    In the phonological literature in English, which is a stress-timed language, the existence of at least three levels of stress is usually taken for granted. Words, phrases, utterances or sentences have a prominent element in one of their syllables, which usually correlates with a partner in the same unit, called the secondary stress. It so happens…

  14. Assessing English Language Learners' Opportunity to Learn Mathematics: Issues and Limitations

    ERIC Educational Resources Information Center

    Abedi, Jamal; Herman, Joan

    2010-01-01

    Background/Context: English language learner (ELL) students are lagging behind because of the extra challenges they face relative to their peers in acquiring academic English language proficiency, and the added burden of learning content in a language in which they are not proficient. The mandated inclusion of ELL students in the nation's…

  15. Addressing the Assessment Dilemma of Additional Language Learners through Dynamic Assessment

    ERIC Educational Resources Information Center

    Omidire, M. F.; Bouwer, A. C.; Jordaan, J. C.

    2011-01-01

    Many learners with an additional language (AL) as their language of learning and teaching (LoLT) have not acquired the level of proficiency required for them to demonstrate their knowledge and achieve the desired outcome on assessment tasks given in that language. Using instruments designed for fully fluent learners and covertly including…

  16. Bilingual Language Acquisition in a Minority Context: Using the Irish-English Communicative Development Inventory to Track Acquisition of an Endangered Language

    ERIC Educational Resources Information Center

    O'Toole, Ciara; Hickey, Tina M.

    2017-01-01

    This study investigated the role of language exposure in vocabulary acquisition in Irish, a threatened minority language in Ireland which is usually acquired with English in a bilingual context. Using a bilingual Irish-English adaptation of the MacArthur-Bates Communicative Development Inventories) [Fenson, L., V. A. Marchman, D. J. Thal, P. S.…

  17. Motor pathway convergence predicts syllable repertoire size in oscine birds

    PubMed Central

    Moore, Jordan M.; Székely, Tamás; Büki, József; DeVoogd, Timothy J.

    2011-01-01

    Behavioral specializations are frequently associated with expansions of the brain regions controlling them. This principle of proper mass spans sensory, motor, and cognitive abilities and has been observed in a wide variety of vertebrate species. Yet, it is unknown if this concept extrapolates to entire neural pathways or how selection on a behavioral capacity might otherwise shape circuit structure. We investigate these questions by comparing the songs and neuroanatomy of 49 species from 17 families of songbirds, which vary immensely in the number of unique song components they produce and possess a conserved neural network dedicated to this behavior. We find that syllable repertoire size is strongly related to the degree of song motor pathway convergence. Repertoire size is more accurately predicted by the number of neurons in higher motor areas relative to that in their downstream targets than by the overall number of neurons in the song motor pathway. Additionally, the convergence values along serial premotor and primary motor projections account for distinct portions of the behavioral variation. These findings suggest that selection on song has independently shaped different components of this hierarchical pathway, and they elucidate how changes in pathway structure could have underlain elaborations of this learned motor behavior. PMID:21918109

  18. Building a Language-Focused Curriculum for the Preschool Classroom. Volume II: A Planning Guide.

    ERIC Educational Resources Information Center

    Bunce, Betty H.

    A language-focused curriculum emphasizes the development of language skills as a key to learning. The curriculum is designed to be appropriate for 3- to 5-year-olds, whether having difficulty acquiring language, developing language skills at a typical rate, or learning English as a second language. Intended to accompany the first volume's…

  19. Typhoid fever acquired in the United States, 1999–2010: epidemiology, microbiology, and use of a space–time scan statistic for outbreak detection

    PubMed Central

    IMANISHI, M.; NEWTON, A. E.; VIEIRA, A. R.; GONZALEZ-AVILES, G.; KENDALL SCOTT, M. E.; MANIKONDA, K.; MAXWELL, T. N.; HALPIN, J. L.; FREEMAN, M. M.; MEDALLA, F.; AYERS, T. L.; DERADO, G.; MAHON, B. E.; MINTZ, E. D.

    2016-01-01

    Although rare, typhoid fever cases acquired in the United States continue to be reported. Detection and investigation of outbreaks in these domestically acquired cases offer opportunities to identify chronic carriers. We searched surveillance and laboratory databases for domestically acquired typhoid fever cases, used a space–time scan statistic to identify clusters, and classified clusters as outbreaks or non-outbreaks. From 1999 to 2010, domestically acquired cases accounted for 18% of 3373 reported typhoid fever cases; their isolates were less often multidrug-resistant (2% vs. 15%) compared to isolates from travel-associated cases. We identified 28 outbreaks and two possible outbreaks within 45 space–time clusters of ⩾2 domestically acquired cases, including three outbreaks involving ⩾2 molecular subtypes. The approach detected seven of the ten outbreaks published in the literature or reported to CDC. Although this approach did not definitively identify any previously unrecognized outbreaks, it showed the potential to detect outbreaks of typhoid fever that may escape detection by routine analysis of surveillance data. Sixteen outbreaks had been linked to a carrier. Every case of typhoid fever acquired in a non-endemic country warrants thorough investigation. Space–time scan statistics, together with shoe-leather epidemiology and molecular subtyping, may improve outbreak detection. PMID:25427666
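
    A space–time scan statistic of the kind used here can be illustrated with a toy Kulldorff-style computation. The sketch below scans only single-region, contiguous-time windows over made-up counts and assumes equal populations per cell; real tools such as SaTScan scan space–time cylinders over many spatial window sizes and assess significance by Monte Carlo simulation:

```python
import math

def scan_statistic(counts):
    """Toy Kulldorff-style space-time scan.
    counts[r][t] = case count in region r, period t. With equal
    populations assumed per cell, the expected count in a window is
    proportional to the number of cells it covers. Returns the window
    (llr, region, t_start, t_end) with the highest Poisson
    log-likelihood ratio among windows with excess risk."""
    R, T = len(counts), len(counts[0])
    C = sum(sum(row) for row in counts)  # total cases
    cells = R * T
    best = None
    for r in range(R):
        for t0 in range(T):
            for t1 in range(t0, T):
                c = sum(counts[r][t0:t1 + 1])          # observed in window
                e = C * (t1 - t0 + 1) / cells          # expected under null
                if c <= e:
                    continue  # only scan for elevated (not deficit) windows
                llr = c * math.log(c / e) + (C - c) * math.log((C - c) / (C - e))
                if best is None or llr > best[0]:
                    best = (llr, r, t0, t1)
    return best

# Baseline of 2 cases per cell, with an injected cluster in region 2
# during periods 3-4 (illustrative data only).
counts = [[2] * 6 for _ in range(4)]
counts[2][3] = counts[2][4] = 12
llr, r, t0, t1 = scan_statistic(counts)
# The scan recovers the injected window: region 2, periods 3-4.
```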

  20. Building flexible real-time systems using the Flex language

    NASA Technical Reports Server (NTRS)

    Kenny, Kevin B.; Lin, Kwei-Jay

    1991-01-01

    The design and implementation of a real-time programming language called Flex, which is a derivative of C++, are presented. It is shown how different types of timing requirements might be expressed and enforced in Flex, how they might be fulfilled in a flexible way using different program models, and how the programming environment can help in making binding and scheduling decisions. The timing constraint primitives in Flex are easy to use yet powerful enough to define both independent and relative timing constraints. Program models like imprecise computation and performance polymorphism can carry out flexible real-time programs. In addition, programmers can use a performance measurement tool that produces statistically correct timing models to predict the expected execution time of a program and to help make binding decisions. A real-time programming environment is also presented.

  1. Validity and reliability of four language mapping paradigms.

    PubMed

    Wilson, Stephen M; Bautista, Alexa; Yen, Melodie; Lauderdale, Stefanie; Eriksson, Dana K

    2017-01-01

    Language areas of the brain can be mapped in individual participants with functional MRI. We investigated the validity and reliability of four language mapping paradigms that may be appropriate for individuals with acquired aphasia: sentence completion, picture naming, naturalistic comprehension, and narrative comprehension. Five neurologically normal older adults were scanned on each of the four paradigms on four separate occasions. Validity was assessed in terms of whether activation patterns reflected the known typical organization of language regions, that is, lateralization to the left hemisphere, and involvement of the left inferior frontal gyrus and the left middle and/or superior temporal gyri. Reliability (test-retest reproducibility) was quantified in terms of the Dice coefficient of similarity, which measures overlap of activations across time points. We explored the impact of different absolute and relative voxelwise thresholds, a range of cluster size cutoffs, and limitation of analyses to a priori potential language regions. We found that the narrative comprehension and sentence completion paradigms offered the best balance of validity and reliability. However, even with optimal combinations of analysis parameters, there were many scans on which known features of typical language organization were not demonstrated, and test-retest reproducibility was only moderate for realistic parameter choices. These limitations in terms of validity and reliability may constitute significant limitations for many clinical or research applications that depend on identifying language regions in individual participants.
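
    The Dice coefficient used above to quantify test-retest reproducibility is simply twice the size of the overlap divided by the summed sizes of the two activation maps. A minimal sketch with toy voxel coordinates (not real scan data):

```python
def dice(a, b):
    """Dice coefficient between two binary activation maps, given as
    sets of suprathreshold voxel coordinates: 2|A & B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0  # two empty maps overlap trivially
    return 2 * len(a & b) / (len(a) + len(b))

# Suprathreshold voxels from two scan sessions (illustrative coordinates).
session1 = {(10, 4, 7), (10, 5, 7), (11, 4, 7), (11, 5, 7)}
session2 = {(10, 4, 7), (10, 5, 7), (11, 4, 7), (12, 6, 8), (12, 6, 9), (13, 6, 9)}
overlap = dice(session1, session2)  # 2*3 / (4+6) = 0.6
```

    A value of 1.0 means identical activation maps across sessions; 0.0 means no shared voxels, so "moderate" reproducibility corresponds to intermediate values.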

  2. Improving Pronunciation Instruction in the Second Language Classroom

    ERIC Educational Resources Information Center

    Counselman, David

    2010-01-01

    Researchers in second language acquisition (SLA) have increasingly discussed the role that attention plays in the learning of a second language (L2). This discussion has led to research on proposed pedagogical strategies aimed at directing L2 learners' attention to aspects of the L2 grammar that are difficult to learn or acquire. Research on one…

  3. Clinical Impact and Implication of Real-Time Oscillation Analysis for Language Mapping.

    PubMed

    Ogawa, Hiroshi; Kamada, Kyousuke; Kapeller, Christoph; Prueckl, Robert; Takeuchi, Fumiya; Hiroshima, Satoru; Anei, Ryogo; Guger, Christoph

    2017-01-01

    We developed a functional brain analysis system that enabled us to perform real-time task-related electrocorticography (ECoG) and evaluated its potential in clinical practice. We hypothesized that high gamma activity (HGA) mapping would provide better spatial and temporal resolution with high signal-to-noise ratios. Seven awake craniotomy patients were evaluated. ECoG was recorded during language tasks using subdural grids, and HGA (60-170 Hz) maps were obtained in real time. The patients also underwent electrocortical stimulation (ECS) mapping to validate the suspected functional locations on HGA mapping. The results were compared and calculated to assess the sensitivity and specificity of HGA mapping. For reference, bedside HGA-ECS mapping was performed in 5 epilepsy patients. HGA mapping demonstrated functional brain areas in real time and was comparable with ECS mapping. Sensitivity and specificity for the language area were 90.1% ± 11.2% and 90.0% ± 4.2%, respectively. Most HGA-positive areas were consistent with ECS-positive regions in both groups, and there were no statistical between-group differences. Although this study included a small number of subjects, it showed real-time HGA mapping with the same setting and tasks under different conditions. This study demonstrates the clinical feasibility of real-time HGA mapping. Real-time HGA mapping enabled simple and rapid detection of language functional areas in awake craniotomy. The mapping results were highly accurate, although the mapping environment was noisy. Further studies of HGA mapping may provide the potential to elaborate complex brain functions and networks. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Children with Specific Language Impairment and Their Families: A Future View of Nature Plus Nurture and New Technologies for Comprehensive Language Intervention Strategies.

    PubMed

    Rice, Mabel L

    2016-11-01

    Future perspectives on children with language impairments are framed from what is known about children with specific language impairment (SLI). A summary of the current state of services is followed by discussion of how these children can be overlooked and misunderstood, and by consideration of why it is so hard for some children to acquire language when it is effortless for most children. Genetic influences are highlighted, with the suggestion that nature plus nurture should be considered in present as well as future intervention approaches. A nurture perspective highlights the family context of the likelihood of SLI for some of the children. Future models of the causal pathways may provide more specific information to guide gene-treatment decisions, in ways parallel to current personalized-medicine approaches. Future treatment options can build on the potential of electronic technologies and social media to provide personalized treatment methods available at a time and place convenient for the person to use as often as desired. The speech-language pathologist could oversee a wide range of treatment options and monitor evidence provided electronically to evaluate progress and plan future treatment steps. Most importantly, future methods can provide lifelong language acquisition activities that maintain the privacy and dignity of persons with language impairment, and in so doing enhance the effectiveness of speech-language pathologists.

  5. Modulation of Language Switching by Cue Timing: Implications for Models of Bilingual Language Control

    ERIC Educational Resources Information Center

    Khateb, Asaid; Shamshoum, Rana; Prior, Anat

    2017-01-01

    The current study examines the interplay between global and local processes in bilingual language control. We investigated language-switching performance of unbalanced Arabic-Hebrew bilinguals in cued picture naming, using 5 different cuing parameters. The language cue could precede the picture, follow it, or appear simultaneously with it. Naming…

  6. Language and Ageing--Exploring Propositional Density in Written Language--Stability over Time

    ERIC Educational Resources Information Center

    Spencer, Elizabeth; Craig, Hugh; Ferguson, Alison; Colyvas, Kim

    2012-01-01

    This study investigated the stability of propositional density (PD) in written texts, as this aspect of language shows promise as an indicator and as a predictor of language decline with ageing. This descriptive longitudinal study analysed written texts obtained from the Australian Longitudinal Study of Women's Health in which participants were…

  7. An examination of the language construct in NIMH's research domain criteria: Time for reconceptualization!

    PubMed Central

    Wolters, Maria K.; Whalley, Heather C.; Gountouna, Viktoria‐Eleni; Kuznetsova, Ksenia A.; Watson, Andrew R.

    2016-01-01

    The National Institute of Mental Health's Research Domain Criteria (RDoC) Initiative “calls for the development of new ways of classifying psychopathology based on dimensions of observable behavior.” As a result of this ambitious initiative, language has been identified as an independent construct in the RDoC matrix. In this article, we frame language within an evolutionary and neuropsychological context and discuss some of the limitations to the current measurements of language. Findings from genomics and the neuroimaging of performance during language tasks are discussed in relation to serious mental illness and within the context of caveats regarding measuring language. Indeed, the data collection and analysis methods employed to assay language have been both aided and constrained by the available technologies, methodologies, and conceptual definitions. Consequently, different fields of language research show inconsistent definitions of language that have become increasingly broad over time. Individually, they have also shown significant improvements in conceptual resolution, as well as in experimental and analytic techniques. More recently, language research has embraced collaborations across disciplines, notably neuroscience, cognitive science, and computational linguistics and has ultimately re‐defined classical ideas of language. As we move forward, the new models of language with their remarkably multifaceted constructs force a re‐examination of the NIMH RDoC conceptualization of language and thus the neuroscience and genetics underlying this concept. © 2016 The Authors. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics Published by Wiley Periodicals, Inc. PMID:26968151

  8. Real-time processing of ASL signs: Delayed first language acquisition affects organization of the mental lexicon

    PubMed Central

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2014-01-01

    Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age of onset of first language acquisition and the quality and quantity of linguistic input are highly heterogeneous for deaf individuals, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1), and in deaf individuals who were late learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native signers demonstrated early and robust activation of sub-lexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not the sensory and motor modality through which the linguistic signal is sent and received. PMID:25528091

  9. The Time Course of Morphological Processing in a Second Language

    ERIC Educational Resources Information Center

    Clahsen, Harald; Balkhair, Loay; Schutter, John-Sebastian; Cunnings, Ian

    2013-01-01

    We report findings from psycholinguistic experiments investigating the detailed timing of processing morphologically complex words by proficient adult second (L2) language learners of English in comparison to adult native (L1) speakers of English. The first study employed the masked priming technique to investigate "-ed" forms with a group of…

  10. Receptive language and educational attainment for sexually abused females.

    PubMed

    Noll, Jennie G; Shenk, Chad E; Yeh, Michele T; Ji, Juye; Putnam, Frank W; Trickett, Penelope K

    2010-09-01

    The objective of this study was to test whether the experience of childhood sexual abuse is associated with long-term receptive language acquisition and educational attainment deficits for females. Females with substantiated familial childhood sexual abuse (n=84) and a nonabused comparison group (n=102) were followed prospectively for 18 years. Receptive language ability was assessed at 6 time points across distinct stages of development, including childhood, adolescence, and young adulthood. Rates of high school graduation and total educational attainment were assessed during young adulthood. Hierarchical linear modeling revealed that receptive language did not differ between the groups at the initial assessment point in childhood; however, a significant group by time interaction was observed across development with abused females (1) acquiring receptive language at a significantly slower rate throughout development and (2) achieving a lower overall maximum level of proficiency. Significant differences in receptive language scores emerged as early as midadolescence. In addition, abused females reported significantly lower rates of high school graduation and lower overall educational attainment when compared with their nonabused peers. Exposure to childhood sexual abuse may be a significant risk factor for cognitive performance and achievement deficits for victims. These findings have particular public health relevance given the high prevalence of sexual abuse and that poor cognitive functioning and low levels of educational attainment can contribute to continued adversity throughout the life course. Early intervention may assist victims in improving cognitive functioning, altering deleterious trajectories, and promoting greater life successes.

  11. Revisiting First Language Acquisition through Empirical and Rational Perspectives

    ERIC Educational Resources Information Center

    Tahriri, Abdorreza

    2012-01-01

    Acquisition in general, and first language acquisition in particular, is a very complex and multifaceted phenomenon. The way that children acquire a language in such a limited period is astonishing. Various approaches have been proposed so far to account for this extraordinary phenomenon. These approaches are indeed based on various philosophical…

  12. Mixing Languages during Learning? Testing the One Subject-One Language Rule.

    PubMed

    Antón, Eneko; Thierry, Guillaume; Duñabeitia, Jon Andoni

    2015-01-01

    In bilingual communities, mixing languages is avoided in formal schooling: even if two languages are used on a daily basis for teaching, only one language is used to teach each given academic subject. This tenet, known as the one subject-one language rule, rests on the assumption that mixing languages in formal schooling hinders learning. The aim of this study was to test the scientific grounds of this assumption by investigating the consequences of acquiring new concepts through a method in which two languages are mixed, as compared to a purely monolingual method. Native balanced bilingual speakers of Basque and Spanish-adults (Experiment 1) and children (Experiment 2)-learnt new concepts by associating two different features with novel objects. Half of the participants completed the learning process in a multilingual context (one feature was described in Basque and the other in Spanish), while the other half completed the learning phase in a purely monolingual context (both features were described in Spanish). Different measures of learning were taken, as well as direct and indirect indicators of concept consolidation. We found no evidence in favor of the non-mixing method when comparing the results of the two groups in either experiment, and thus failed to find scientific support for the educational premise of the one subject-one language rule.

  14. Shedding Light on Words and Sentences: Near-Infrared Spectroscopy in Language Research

    ERIC Educational Resources Information Center

    Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell; Obrig, Hellmuth

    2012-01-01

    Investigating the neuronal network underlying language processing may contribute to a better understanding of how the brain masters this complex cognitive function with surprising ease and how language is acquired at a fast pace in infancy. Modern neuroimaging methods permit to visualize the evolvement and the function of the language network. The…

  15. The development of perceptual attention and articulatory skill in one or two languages

    NASA Astrophysics Data System (ADS)

    Fowler, Carol; Best, Catherine

    2002-05-01

    Infants acquire properties of their native language especially during the second half of the first year of life. Models such as Jusczyk's WRAPSA, Best's PAM, Kuhl's NLM, and Werker's account describe changes in perceptual or attentional space that may underlie the perceptual changes that infants exhibit. Unknown is the relation of these changes to changes in speechlike vocalizations that occur at the same time. Future research should address whether the perceptual models predict production learning. Other issues concern how the perceptual and articulatory systems develop for infants exposed to more than one language. Do multiple perceptual spaces develop, or does one space accommodate both languages? For infants exposed to just one language, but living in an environment where the ambient and pedagogical language is different (say, infants in a monolingual Spanish home in the U.S.), early language learning fosters learning the native language, but it may impede learning the ambient language. How much or how little does early exposure to the ambient language allow development of perceptual and articulatory systems for the ambient language? A final issue addresses whether the emergence of lexical, morphological, and/or syntactic abilities in the second year is related to further changes in speech perception and production. [Work supported by NICHD.]

  16. Rhythm in language acquisition.

    PubMed

    Langus, Alan; Mehler, Jacques; Nespor, Marina

    2017-10-01

    Spoken language is governed by rhythm. Linguistic rhythm is hierarchical and the rhythmic hierarchy partially mimics the prosodic as well as the morpho-syntactic hierarchy of spoken language. It can thus provide learners with cues about the structure of the language they are acquiring. We identify three universal levels of linguistic rhythm - the segmental level, the level of the metrical feet and the phonological phrase level - and discuss why primary lexical stress is not rhythmic. We survey experimental evidence on rhythm perception in young infants and native speakers of various languages to determine the properties of linguistic rhythm that are present at birth, those that mature during the first year of life and those that are shaped by the linguistic environment of language learners. We conclude with a discussion of the major gaps in current knowledge on linguistic rhythm and highlight areas of interest for future research that are most likely to yield significant insights into the nature, the perception, and the usefulness of linguistic rhythm.

  17. Experience-Related Structural Changes of Degenerated Occipital White Matter in Late-Blind Humans – A Diffusion Tensor Imaging Study

    PubMed Central

    Dietrich, Susanne; Hertrich, Ingo; Kumar, Vinod; Ackermann, Hermann

    2015-01-01

    Late-blind humans can learn to understand speech at ultra-fast syllable rates (ca. 20 syllables/s), a capability associated with hemodynamic activation of the central-visual system. Thus, the observed functional cross-modal recruitment of occipital cortex might facilitate ultra-fast speech processing in these individuals. To further elucidate the structural prerequisites of this skill, diffusion tensor imaging (DTI) was conducted in late-blind subjects differing in their capability of understanding ultra-fast speech. Fractional anisotropy (FA) was determined as a quantitative measure of the directionality of water diffusion, indicating fiber tract characteristics that might be influenced by blindness as well as the acquired perceptual skills. Analysis of the diffusion images revealed reduced FA in late-blind individuals relative to sighted controls at the level of the optic radiations at either side and the right-hemisphere dorsal thalamus (pulvinar). Moreover, late-blind subjects showed significant positive correlations between FA and the capacity of ultra-fast speech comprehension within right-hemisphere optic radiation and thalamus. Thus, experience-related structural alterations occurred in late-blind individuals within visual pathways that, presumably, are linked to higher order frontal language areas. PMID:25830371
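
    The fractional anisotropy measure used in this study has a standard closed form: it is computed from the three eigenvalues of the diffusion tensor and ranges from 0 (isotropic diffusion) to 1 (diffusion along a single axis). The sketch below illustrates that standard formula only; it is not the study's DTI analysis pipeline, which would involve fitting tensors to the diffusion-weighted images first.

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three diffusion-tensor eigenvalues (standard formula):
    sqrt((l1-l2)^2 + (l2-l3)^2 + (l1-l3)^2) / sqrt(2*(l1^2 + l2^2 + l3^2))."""
    l1, l2, l3 = evals
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l1 - l3) ** 2)
    den = np.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return num / den

fractional_anisotropy((1.0, 1.0, 1.0))  # isotropic diffusion -> 0.0
fractional_anisotropy((1.0, 0.0, 0.0))  # fully anisotropic   -> 1.0
```

    Reduced FA in a tract, as reported here for the optic radiations, thus reflects less directionally coherent water diffusion.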

  18. Interactive natural language acquisition in a multi-modal recurrent neural architecture

    NASA Astrophysics Data System (ADS)

    Heinrich, Stefan; Wermter, Stefan

    2018-01-01

    For the complex human brain that enables us to communicate in natural language, we have gathered a good understanding of the principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we do not yet understand the behavioural and mechanistic characteristics of natural language processing, or how mechanisms in the brain allow us to acquire and process language. By bridging insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of the characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain and propose a neurocognitively plausible model for embodied language acquisition from real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous-time recurrent neural network in which different parts have different leakage characteristics and thus operate on multiple timescales for every modality, with the higher-level nodes of all modalities associated into cell assemblies. The model is capable of learning language production grounded in both temporally dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.
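
    The multiple-timescale idea in such continuous-time recurrent networks can be sketched with a leaky-integrator update in which each unit has its own time constant: units with a large time constant leak slowly and integrate over long timescales, while small-time-constant units track fast input changes. The network size, weights, and time constants below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def ctrnn_step(h, x, W, tau, dt=1.0):
    """One Euler step of a continuous-time RNN:
    tau * dh/dt = -h + W @ tanh(h) + x.
    Larger tau -> slower leak -> longer integration timescale."""
    return h + (dt / tau) * (-h + W @ np.tanh(h) + x)

rng = np.random.default_rng(0)
n = 8
W = 0.1 * rng.standard_normal((n, n))
tau = np.where(np.arange(n) < n // 2, 2.0, 30.0)  # fast vs. slow units

h = np.zeros(n)
for _ in range(50):                # drive with a constant input
    h = ctrnn_step(h, x=np.ones(n), W=W, tau=tau)
```

    After these steps the fast units have settled near their input-driven equilibrium while the slow units are still converging, which is the kind of timescale separation the model exploits across modalities.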

  19. Do grammatical-gender distinctions learned in the second language influence native-language lexical processing?

    PubMed Central

    Kaushanskaya, Margarita; Smith, Samantha

    2015-01-01

    How does learning a second language influence native language processing? In the present study, we examined whether knowledge of Spanish – a language that marks grammatical gender on inanimate nouns – influences lexical processing in English – a language that does not mark grammatical gender. We tested three groups of adult English native speakers: monolinguals, emergent bilinguals with high exposure to Spanish, and emergent bilinguals with low exposure to Spanish. Participants engaged in an associative learning task in English where they learned to associate names of inanimate objects with proper names. For half of the pairs, the grammatical gender of the noun’s Spanish translation matched the gender of the proper name (e.g., corn-Patrick). For half of the pairs, the grammatical gender of the noun’s Spanish translation mismatched the gender of the proper noun (e.g., beach-William). High-Spanish-exposure bilinguals (but not monolinguals or low-Spanish-exposure bilinguals) were less accurate at retrieving proper names for gender-incongruent than for gender-congruent pairs. This indicates that second-language morphosyntactic information is activated during native-language processing, even when the second language is acquired later in life. PMID:26977134

  20. Child first language and adult second language are both tied to general-purpose learning systems.

    PubMed

    Hamrick, Phillip; Lum, Jarrad A G; Ullman, Michael T

    2018-02-13

    Do the mechanisms underlying language in fact serve general-purpose functions that preexist this uniquely human capacity? To address this contentious and empirically challenging issue, we systematically tested the predictions of a well-studied neurocognitive theory of language motivated by evolutionary principles. Multiple meta-analyses were performed to examine predicted links between language and two general-purpose learning systems, declarative and procedural memory. The results tied lexical abilities to learning only in declarative memory, while grammar was linked to learning in both systems, in both child first language and adult second language, in specific ways. In second language learners, grammar was associated with only declarative memory at lower language experience, but with only procedural memory at higher experience. The findings yielded large effect sizes and held consistently across languages, language families, linguistic structures, and tasks, underscoring their reliability and validity. The results, which met the predicted pattern, provide comprehensive evidence that language is tied to general-purpose systems both in children acquiring their native language and in adults learning an additional language. Crucially, if language learning relies on these systems, then our extensive knowledge of the systems from animal and human studies may also apply to this domain, leading to predictions that might be unwarranted in the more circumscribed study of language. Thus, by demonstrating a role for these systems in language, the findings simultaneously lay a foundation for potentially important advances in the study of this critical domain.