Sample records for youth spoken word

  1. "A Unified Poet Alliance": The Personal and Social Outcomes of Youth Spoken Word Poetry Programming

    ERIC Educational Resources Information Center

    Weinstein, Susan

    2010-01-01

    This article places youth spoken word (YSW) poetry programming within the larger framework of arts education. Drawing primarily on transcripts of interviews with teen poets and adult teaching artists and program administrators, the article identifies specific benefits that participants ascribe to youth spoken word, including the development of…

  2. Call and Responsibility: Critical Questions for Youth Spoken Word Poetry

    ERIC Educational Resources Information Center

    Weinstein, Susan; West, Anna

    2012-01-01

    In this article, Susan Weinstein and Anna West embark on a critical analysis of the maturing field of youth spoken word poetry (YSW). Through a blend of firsthand experience, analysis of YSW-related films and television, and interview data from six years of research, the authors identify specific dynamics that challenge young poets as they…

  3. Famous talker effects in spoken word recognition.

    PubMed

    Maibauer, Alisa M; Markis, Teresa A; Newell, Jessica; McLennan, Conor T

    2014-01-01

    Previous work has demonstrated that talker-specific representations affect spoken word recognition relatively late during processing. However, participants in these studies were listening to unfamiliar talkers. In the present research, we used a long-term repetition-priming paradigm and a speeded-shadowing task and presented listeners with famous talkers. In Experiment 1, half the words were spoken by Barack Obama, and half by Hillary Clinton. Reaction times (RTs) to repeated words were shorter than those to unprimed words only when repeated by the same talker. However, in Experiment 2, using nonfamous talkers, RTs to repeated words were shorter than those to unprimed words both when repeated by the same talker and when repeated by a different talker. Taken together, the results demonstrate that talker-specific details can affect the perception of spoken words relatively early during processing when words are spoken by famous talkers.

  4. A Spoken Word Count.

    ERIC Educational Resources Information Center

    Jones, Lyle V.; Wepman, Joseph M.

    This word count is a composite listing of the different words spoken by a selected sample of 54 English-speaking adults and the frequency with which each of the different words was used in a particular test. The stimulus situation was identical for each subject and consisted of 20 cards of the Thematic Apperception Test. Although most word counts…

  5. Influences of spoken word planning on speech recognition.

    PubMed

    Roelofs, Ardi; Ozdemir, Rebecca; Levelt, Willem J M

    2007-09-01

In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.

  6. The Academic Spoken Word List

    ERIC Educational Resources Information Center

    Dang, Thi Ngoc Yen; Coxhead, Averil; Webb, Stuart

    2017-01-01

    The linguistic features of academic spoken English are different from those of academic written English. Therefore, for this study, an Academic Spoken Word List (ASWL) was developed and validated to help second language (L2) learners enhance their comprehension of academic speech in English-medium universities. The ASWL contains 1,741 word…

  7. Recognizing Spoken Words: The Neighborhood Activation Model

    PubMed Central

    Luce, Paul A.; Pisoni, David B.

    2012-01-01

    Objective A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition. Design Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming. Results The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in
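The record describes the neighborhood probability rule only qualitatively. The sketch below illustrates the general form of a frequency-weighted Luce (1959) choice rule, in which a stimulus word's intelligibility competes with the summed, frequency-weighted confusability of its similarity neighborhood; the function and parameter names are illustrative, not taken from the paper.

```python
def neighborhood_probability(target_p, target_freq, neighbors):
    """Sketch of a frequency-weighted Luce choice rule.

    target_p    -- stimulus-word intelligibility (identification probability)
    target_freq -- frequency weight of the target word
    neighbors   -- list of (confusability, frequency) pairs for the
                   words in the similarity neighborhood
    """
    target = target_p * target_freq
    # The target competes against the summed, frequency-weighted
    # activation of its neighborhood.
    competition = sum(p * f for p, f in neighbors)
    return target / (target + competition)
```

Under this form, denser or higher-frequency neighborhoods lower the predicted identification probability, consistent with the behavioral results summarized in the abstract.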

  8. Orthographic effects in spoken word recognition: Evidence from Chinese.

    PubMed

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  9. Spoken words can make the invisible visible-Testing the involvement of low-level visual representations in spoken word processing.

    PubMed

    Ostarek, Markus; Huettig, Falk

    2017-03-01

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" → picture of a bottle) versus incongruent ("bottle" → picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, that is, what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Rapid modulation of spoken word recognition by visual primes.

    PubMed

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  11. Rapid modulation of spoken word recognition by visual primes

    PubMed Central

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.

    2015-01-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics. PMID:26516296

  12. Engaging Minority Youth in Diabetes Prevention Efforts Through a Participatory, Spoken-Word Social Marketing Campaign.

    PubMed

    Rogers, Elizabeth A; Fine, Sarah C; Handley, Margaret A; Davis, Hodari B; Kass, James; Schillinger, Dean

    2017-07-01

To examine the reach, efficacy, and adoption of The Bigger Picture, a type 2 diabetes (T2DM) social marketing campaign that uses spoken-word public service announcements (PSAs) to teach youth about socioenvironmental conditions influencing T2DM risk. A nonexperimental pilot dissemination evaluation through high school assemblies and a Web-based platform was used. The study took place in San Francisco Bay Area high schools during 2013. In the study, 885 students were sampled from 13 high schools. A 1-hour assembly provided data, poet performances, video PSAs, and Web-based platform information. A Web-based platform featured the campaign Web site and social media. Student surveys preassembly and postassembly (knowledge, attitudes), assembly observations, school demographics, counts of Web-based utilization, and adoption were measured. Descriptive statistics, McNemar's χ2 test, and mixed modeling accounting for clustering were used to analyze data. The campaign included 23 youth poet-created PSAs. It reached >2400 students (93% self-identified non-white) through school assemblies and has garnered >1,000,000 views of Web-based video PSAs. School participants demonstrated increased short-term knowledge of T2DM as preventable, with risk driven by socioenvironmental factors (34% preassembly identified environmental causes as influencing T2DM risk compared to 83% postassembly), and perceived greater personal salience of T2DM risk reduction (p < .001 for all). The campaign has been adopted by regional public health departments. The Bigger Picture campaign showed its potential for reaching and engaging diverse youth. Campaign messaging is being adopted by stakeholders.

  13. Novel Spoken Word Learning in Adults with Developmental Dyslexia

    ERIC Educational Resources Information Center

    Conner, Peggy S.

    2013-01-01

    A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…

  14. The Activation of Embedded Words in Spoken Word Identification Is Robust but Constrained: Evidence from the Picture-Word Interference Paradigm

    ERIC Educational Resources Information Center

    Bowers, Jeffrey S.; Davis, Colin J.; Mattys, Sven L.; Damian, Markus F.; Hanley, Derek

    2009-01-01

    Three picture-word interference (PWI) experiments assessed the extent to which embedded subset words are activated during the identification of spoken superset words (e.g., "bone" in "trombone"). Participants named aloud pictures (e.g., "brain") while spoken distractors were presented. In the critical condition,…

  15. Instructional Benefits of Spoken Words: A Review of Cognitive Load Factors

    ERIC Educational Resources Information Center

    Kalyuga, Slava

    2012-01-01

    Spoken words have always been an important component of traditional instruction. With the development of modern educational technology tools, spoken text more often replaces or supplements written or on-screen textual representations. However, there could be a cognitive load cost involved in this trend, as spoken words can have both benefits and…

  16. Learning and Consolidation of Novel Spoken Words

    ERIC Educational Resources Information Center

    Davis, Matthew H.; Di Betta, Anna Maria; Macdonald, Mark J. E.; Gaskell, Gareth

    2009-01-01

    Two experiments explored the neural mechanisms underlying the learning and consolidation of novel spoken words. In Experiment 1, participants learned two sets of novel words on successive days. A subsequent recognition test revealed high levels of familiarity for both sets. However, a lexical decision task showed that only novel words learned on…

  17. Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words

    PubMed Central

    Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.

    2012-01-01

    Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774
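Neighborhood density as defined in this record counts the words that are phonologically similar to a given word, conventionally those reachable by one phoneme substitution, deletion, or addition. A minimal sketch of that computation, using plain segment strings in place of real phonetic transcriptions (the lexicon here is a toy example, not the paper's materials):

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance with a rolling row.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # dp[j] = old row (deletion), dp[j-1] = current row (insertion),
            # prev = old diagonal (substitution / match)
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def neighborhood(word, lexicon):
    # Similarity neighborhood: words one segment edit away from the target.
    return [w for w in lexicon if edit_distance(word, w) == 1]
```

With a density measure like this in hand, the abstract's contrast can be stated directly: high-probability segments speed responses to nonwords (a sublexical effect), while a large `neighborhood(word, lexicon)` slows responses to real words (a lexical effect).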

  18. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    ERIC Educational Resources Information Center

    Yip, Michael C.

    2016-01-01

Two word-spotting experiments were conducted to examine whether native Cantonese listeners are constrained by phonotactic information in the spoken word recognition of Chinese words in speech. Because no legal consonant clusters occur within an individual Chinese word, this kind of categorical phonotactic information about Chinese…

  19. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  20. Interference of spoken word recognition through phonological priming from visual objects and printed words.

    PubMed

    McQueen, James M; Huettig, Falk

    2014-01-01

Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures and even though strategic naming would interfere with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

  1. Scaling laws and model of words organization in spoken and written language

    NASA Astrophysics Data System (ADS)

    Bian, Chunhua; Lin, Ruokuang; Zhang, Xiaoyu; Ma, Qianli D. Y.; Ivanov, Plamen Ch.

    2016-01-01

A broad range of complex physical and biological systems exhibits scaling laws. Human language is a complex system of word organization. Studies of written texts have revealed intriguing scaling laws that characterize the frequency of word occurrence, the rank of words, and the growth in the number of distinct words with text length. While studies have predominantly focused on the language system in its written form, such as books, little attention has been given to the structure of spoken language. Here we investigate a database of spoken language transcripts and written texts, and we uncover that word organization in both spoken language and written texts exhibits scaling laws, although with different crossover regimes and scaling exponents. We propose a model that provides insight into word organization in spoken language and written texts, and successfully accounts for all scaling laws empirically observed in both language forms.
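The record names its scaling laws only descriptively (frequency of occurrence, rank of words, vocabulary growth with text length). Two classic forms these correspond to are Zipf's rank-frequency law and Heaps' vocabulary-growth law; the sketch below shows those generic forms, with parameter values chosen purely for illustration rather than fit to the paper's data:

```python
def zipf_frequency(rank, alpha=1.0):
    # Zipf's law: a word's relative frequency decays as a power of its rank.
    return rank ** -alpha

def heaps_vocabulary(n_tokens, k=10, beta=0.5):
    # Heaps' law: the number of distinct words grows sublinearly
    # with text length (beta < 1).
    return k * n_tokens ** beta
```

The paper's finding of different crossover regimes and exponents for spoken versus written language would correspond, in this framing, to different alpha/beta values (and breakpoints) fit to each corpus.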

  2. Semantic information mediates visual attention during spoken word recognition in Chinese: Evidence from the printed-word version of the visual-world paradigm.

    PubMed

    Shen, Wei; Qu, Qingqing; Li, Xingshan

    2016-07-01

    In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, it also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.

  3. Talker and accent variability effects on spoken word recognition

    NASA Astrophysics Data System (ADS)

    Nyang, Edna E.; Rogers, Catherine L.; Nishi, Kanae

    2003-04-01

    A number of studies have shown that words in a list are recognized less accurately in noise and with longer response latencies when they are spoken by multiple talkers, rather than a single talker. These results have been interpreted as support for an exemplar-based model of speech perception, in which it is assumed that detailed information regarding the speaker's voice is preserved in memory and used in recognition, rather than being eliminated via normalization. In the present study, the effects of varying both accent and talker are investigated using lists of words spoken by (a) a single native English speaker, (b) six native English speakers, (c) three native English speakers and three Japanese-accented English speakers. Twelve /hVd/ words were mixed with multi-speaker babble at three signal-to-noise ratios (+10, +5, and 0 dB) to create the word lists. Native English-speaking listeners' percent-correct recognition for words produced by native English speakers across the three talker conditions (single talker native, multi-talker native, and multi-talker mixed native and non-native) and three signal-to-noise ratios will be compared to determine whether sources of speaker variability other than voice alone add to the processing demands imposed by simple (i.e., single accent) speaker variability in spoken word recognition.

  4. The time course of morphological processing during spoken word recognition in Chinese.

    PubMed

    Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan

    2017-12-01

We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and that morphological processing (i.e., semantic access to the first constituent) occurs at an early processing stage, before access to the representation of the whole word in Chinese.

  5. Spoken Word Recognition of Chinese Words in Continuous Speech

    ERIC Educational Resources Information Center

    Yip, Michael C. W.

    2015-01-01

The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending position of Cantonese syllables than others, this kind of probabilistic syllable information may cue the locations…

  6. Attempting Arts Integration: Secondary Teachers' Experiences with Spoken Word Poetry

    ERIC Educational Resources Information Center

    Williams, Wendy R.

    2018-01-01

    Spoken word poetry is an art form that involves poetry writing and performance. Past research on spoken word has described the benefits for poets and looked at its use in pre-service teacher education; however, research is needed to understand how to assist in-service teachers in using this art form. During the 2016-2017 school year, 15 teachers…

  7. Interference Effects on the Recall of Pictures, Printed Words, and Spoken Words.

    ERIC Educational Resources Information Center

    Burton, John K.; Bruning, Roger H.

    1982-01-01

    Nouns were presented in triads as pictures, printed words, or spoken words and followed by various types of interference. Measures of short- and long-term memory were obtained. In short-term memory, pictorial superiority occurred with acoustic, and visual and acoustic, but not visual interference. Long-term memory showed superior recall for…

  8. Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders.

    PubMed

    Evans, Julia L; Gillam, Ronald B; Montgomery, James W

    2018-05-10

This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLDs.

  9. An event-related potential study of memory for words spoken aloud or heard.

    PubMed

    Wilding, E L; Rugg, M D

    1997-09-01

    Subjects made old/new recognition judgements to visually presented words, half of which had been encountered in a prior study phase. For each word judged old, subjects made a subsequent source judgement, indicating whether they had pronounced the word aloud at study (spoken words), or whether they had heard the word spoken to them (heard words). Event-related potentials (ERPs) were compared for three classes of test item; words correctly judged to be new (correct rejections), and spoken and heard words that were correctly assigned to source (spoken hit/hit and heard hit/hit response categories). Consistent with previous findings (Wilding, E. L. and Rugg, M. D., Brain, 1996, 119, 889-905), two temporally and topographically dissociable components, with parietal and frontal maxima respectively, differentiated the ERPs to the hit/hit and correct rejection response categories. In addition, there was some evidence that the frontally distributed component could be decomposed into two distinct components, only one of which differentiated the two classes of hit/hit ERPs. The findings suggest that at least three functionally and neurologically dissociable processes can contribute to successful recovery of source information.

  10. Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences.

    PubMed

    Hunter, Cynthia R; Pisoni, David B

    Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low

  11. Individual differences in online spoken word recognition: Implications for SLI

    PubMed Central

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2012-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels, vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014
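    The lexical-decay account can be illustrated with a toy activation model. This is a loose sketch in the spirit of interactive-activation models such as TRACE, not the actual TRACE implementation, and the input strengths and decay values are invented for illustration:

    ```python
    # Toy lexical-activation dynamics with a decay parameter. Each word's
    # activation grows with bottom-up input and is pulled back toward its
    # resting level by decay; a larger decay rate lowers the activation a
    # word can sustain, mimicking the reduced target looks described above.
    import numpy as np

    def activate(bottom_up, decay, n_steps=50, rest=0.0):
        """bottom_up: per-word input strength; returns activation trajectories."""
        act = np.full(len(bottom_up), rest)
        history = []
        for _ in range(n_steps):
            act = act + bottom_up * (1.0 - act) - decay * (act - rest)
            history.append(act.copy())
        return np.array(history)

    inputs = np.array([0.10, 0.05, 0.02])  # target, cohort, rhyme (illustrative)
    normal = activate(inputs, decay=0.05)
    impaired = activate(inputs, decay=0.20)
    # Faster decay lowers asymptotic target activation
    print(normal[-1, 0], impaired[-1, 0])
    ```

    At equilibrium each activation settles near input / (input + decay), so raising decay compresses the whole activation profile rather than slowing any single stage, which is one way to read the best-fitting variant.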

  12. Immediate effects of anticipatory coarticulation in spoken-word recognition

    PubMed Central

    Salverda, Anne Pier; Kleinschmidt, Dave; Tanenhaus, Michael K.

    2014-01-01

    Two visual-world experiments examined listeners’ use of pre-word-onset anticipatory coarticulation in spoken-word recognition. Experiment 1 established the shortest lag with which information in the speech signal influences eye-movement control, using stimuli such as “The … ladder is the target”. With a neutral token of the definite article preceding the target word, saccades to the referent were not more likely than saccades to an unrelated distractor until 200–240 ms after the onset of the target word. In Experiment 2, utterances contained definite articles which contained natural anticipatory coarticulation pertaining to the onset of the target word (“The ladder … is the target”). A simple Gaussian classifier was able to predict the initial sound of the upcoming target word from formant information from the first few pitch periods of the article’s vowel. With these stimuli, effects of speech on eye-movement control began about 70 ms earlier than in Experiment 1, suggesting rapid use of anticipatory coarticulation. The results are interpreted as support for “data explanation” approaches to spoken-word recognition. Methodological implications for visual-world studies are also discussed. PMID:24511179
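    A simple Gaussian classifier of the kind described can be sketched as follows: fit one Gaussian per candidate onset over formant measurements, then classify by likelihood. The class means, variances, and training data below are synthetic, not the study's measurements:

    ```python
    # Gaussian classifier over (F1, F2) formant measurements from the
    # article's vowel, predicting which onset consonant follows.
    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic F1/F2 means (Hz) for coarticulation toward two onsets
    class_means = {"l": np.array([450.0, 1700.0]), "b": np.array([500.0, 1100.0])}
    train = {c: rng.normal(m, [40.0, 80.0], size=(50, 2)) for c, m in class_means.items()}

    def fit_gaussian(X):
        # Per-class mean vector and covariance matrix
        return X.mean(axis=0), np.cov(X, rowvar=False)

    def log_likelihood(x, mean, cov):
        d = x - mean
        return -0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)))

    models = {c: fit_gaussian(X) for c, X in train.items()}

    def predict(x):
        # Choose the class whose Gaussian assigns the higher likelihood
        return max(models, key=lambda c: log_likelihood(x, *models[c]))

    print(predict(np.array([460.0, 1680.0])))  # formants near the "l" class
    ```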

  13. "Poetry Does Really Educate": An Interview with Spoken Word Poet Luka Lesson

    ERIC Educational Resources Information Center

    Xerri, Daniel

    2016-01-01

    Spoken word poetry is a means of engaging young people with a genre that has often been much maligned in classrooms all over the world. This interview with the Australian spoken word poet Luka Lesson explores issues that are of pressing concern to poetry education. These include the idea that engagement with poetry in schools can be enhanced by…

  14. Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes

    ERIC Educational Resources Information Center

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  15. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    PubMed

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-06-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  16. A Word by Any Other Intonation: FMRI Evidence for Implicit Memory Traces for Pitch Contours of Spoken Words in Adult Brains

    PubMed Central

    Inspector, Michael; Manor, David; Amir, Noam; Kushnir, Tamar; Karni, Avi

    2013-01-01

    Objectives Intonation may serve as a cue for facilitated recognition and processing of spoken words and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. Experimental design Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented with a set, flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant throughout the three repetitions; (iii) each word had a different arbitrary pitch contour on each of its repetitions. Principal findings The repeated presentations of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21, 22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. Conclusions Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words. PMID:24391713

  17. The acceleration of spoken-word processing in children's native-language acquisition: an ERP cohort study.

    PubMed

    Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hagiwara, Hiroko

    2011-04-01

    Healthy adults can identify spoken words at a remarkable speed, by incrementally analyzing word-onset information. It is currently unknown how this adult-level speed of spoken-word processing emerges during children's native-language acquisition. In a picture-word mismatch paradigm, we manipulated the semantic congruency between picture contexts and spoken words, and recorded event-related potential (ERP) responses to the words. Previous similar studies focused on the N400 response, but we focused instead on the onsets of semantic congruency effects (N200 or Phonological Mismatch Negativity), which contain critical information for incremental spoken-word processing. We analyzed ERPs obtained longitudinally from two age cohorts of 40 primary-school children (total n = 80) in a 3-year period. Children first tested at 7 years of age showed earlier onsets of congruency effects (by approximately 70 ms) when tested 2 years later (i.e., at age 9). Children first tested at 9 years of age did not show such shortening of onset latencies 2 years later (i.e., at age 11). Overall, children's onset latencies at age 9 appeared similar to those of adults. These data challenge the previous hypothesis that word processing is well established at age 7. Instead they support the view that the acceleration of spoken-word processing continues beyond age 7. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Italians Use Abstract Knowledge about Lexical Stress during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Sulpizio, Simone; McQueen, James M.

    2012-01-01

    In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word's stress pattern rapidly in word recognition, but only for words with antepenultimate stress.…

  19. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    ERIC Educational Resources Information Center

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  20. A word by any other intonation: fMRI evidence for implicit memory traces for pitch contours of spoken words in adult brains.

    PubMed

    Inspector, Michael; Manor, David; Amir, Noam; Kushnir, Tamar; Karni, Avi

    2013-01-01

    Intonation may serve as a cue for facilitated recognition and processing of spoken words and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented with a set, flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant throughout the three repetitions; (iii) each word had a different arbitrary pitch contour on each of its repetitions. The repeated presentations of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21, 22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

  1. Spoken Word Recognition in Toddlers Who Use Cochlear Implants

    PubMed Central

    Grieco-Calub, Tina M.; Saffran, Jenny R.; Litovsky, Ruth Y.

    2010-01-01

    Purpose The purpose of this study was to assess the time course of spoken word recognition in 2-year-old children who use cochlear implants (CIs) in quiet and in the presence of speech competitors. Method Children who use CIs and age-matched peers with normal acoustic hearing listened to familiar auditory labels, in quiet or in the presence of speech competitors, while their eye movements to target objects were digitally recorded. Word recognition performance was quantified by measuring each child’s reaction time (i.e., the latency between the spoken auditory label and the first look at the target object) and accuracy (i.e., the amount of time that children looked at target objects within 367 ms to 2,000 ms after the label onset). Results Children with CIs were less accurate and took longer to fixate target objects than did age-matched children without hearing loss. Both groups of children showed reduced performance in the presence of the speech competitors, although many children continued to recognize labels at above-chance levels. Conclusion The results suggest that the unique auditory experience of young CI users slows the time course of spoken word recognition abilities. In addition, real-world listening environments may slow language processing in young language learners, regardless of their hearing status. PMID:19951921

  2. Individual Differences in Online Spoken Word Recognition: Implications for SLI

    ERIC Educational Resources Information Center

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2010-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…

  3. English Listeners Use Suprasegmental Cues to Lexical Stress Early During Spoken-Word Recognition

    PubMed Central

    Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., “Click on the word admiral”). Displays contained a critical pair of words (e.g., ˈadmiral–ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time. Results Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern. Conclusions Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress. PMID:28056135

  4. Infant perceptual development for faces and spoken words: An integrated approach

    PubMed Central

    Watson, Tamara L; Robbins, Rachel A; Best, Catherine T

    2014-01-01

    There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. PMID:25132626

  5. Orthographic Facilitation Effects on Spoken Word Production: Evidence from Chinese

    ERIC Educational Resources Information Center

    Zhang, Qingfang; Weekes, Brendan Stuart

    2009-01-01

    The aim of this experiment was to investigate the time course of orthographic facilitation on picture naming in Chinese. We used a picture-word paradigm to investigate orthographic and phonological facilitation on monosyllabic spoken word production in native Mandarin speakers. Both the stimulus-onset asynchrony (SOA) and the picture-word…

  6. The time course of spoken word learning and recognition: studies with artificial lexicons.

    PubMed

    Magnuson, James S; Tanenhaus, Michael K; Aslin, Richard N; Dahan, Delphine

    2003-06-01

    The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.

  7. Lexical Competition in Non-Native Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Weber, Andrea; Cutler, Anne

    2004-01-01

    Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name ("pencil," given target "panda") than on less confusable distractors…

  8. A Spoken Word Count (Children--Ages 5, 6 and 7).

    ERIC Educational Resources Information Center

    Wepman, Joseph M.; Hass, Wilbur

    Relatively little research has been done on the quantitative characteristics of children's word usage. This spoken word count was undertaken to investigate those aspects of word usage and frequency which could cast light on lexical processes in grammar and verbal development in children. Three groups of 30 children each (boys and girls) from…

  9. L2 Gender Facilitation and Inhibition in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Behney, Jennifer N.

    2011-01-01

    This dissertation investigates the role of grammatical gender facilitation and inhibition in second language (L2) learners' spoken word recognition. Native speakers of languages that have grammatical gender are sensitive to gender marking when hearing and recognizing a word. Gender facilitation refers to when a given noun that is preceded by an…

  10. Modeling Spoken Word Recognition Performance by Pediatric Cochlear Implant Users using Feature Identification

    PubMed Central

    Frisch, Stefan A.; Pisoni, David B.

    2012-01-01

    Objective Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These models also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception. Design A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Results Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. Conclusions Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition. Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with…
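    The two simulated lexical-access strategies described above (committing to phonemes early and then searching the lexicon, versus deferring phoneme decisions until lexical access) can be sketched as follows. The lexicon and the phoneme confusion probabilities are hypothetical, standing in for feature identification scores:

    ```python
    # Two toy lexical-access strategies driven by a phoneme confusion matrix.
    import numpy as np

    phones = ["b", "p", "a", "t", "d"]
    idx = {p: i for i, p in enumerate(phones)}
    # P(response | stimulus) per phoneme (toy values)
    conf = np.array([
        [0.70, 0.20, 0.00, 0.00, 0.10],  # /b/
        [0.25, 0.70, 0.00, 0.05, 0.00],  # /p/
        [0.00, 0.00, 1.00, 0.00, 0.00],  # /a/
        [0.00, 0.05, 0.00, 0.75, 0.20],  # /t/
        [0.10, 0.00, 0.00, 0.30, 0.60],  # /d/
    ])
    lexicon = ["bat", "pat", "bad", "pad"]

    def early_decision(word):
        # Commit to the single most likely phoneme at each position,
        # then find the lexical item with the fewest mismatches.
        heard = "".join(phones[np.argmax(conf[idx[p]])] for p in word)
        return min(lexicon, key=lambda w: sum(a != b for a, b in zip(heard, w)))

    def late_decision(word):
        # Keep phoneme probabilities and score each whole candidate word.
        def score(w):
            return np.prod([conf[idx[s], idx[r]] for s, r in zip(word, w)])
        return max(lexicon, key=score)

    print(early_decision("bat"), late_decision("bat"))
    ```

    The late strategy integrates graded phoneme evidence over the whole word, which is the property the abstract reports as the better predictor of open-set word recognition.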

  11. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    PubMed

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
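    One way to compute a phi-square-based confusability measure from confusion-matrix rows is sketched below; the counts are toy data and the cited work's exact formulation may differ in detail:

    ```python
    # Phi-square between two stimuli: treat their response rows in a
    # confusion matrix as a 2 x K contingency table; phi-square is
    # chi-square divided by the total count. Identical response
    # distributions give 0 (maximally similar/confusable percepts);
    # larger values indicate more distinct response patterns.
    import numpy as np

    def phi_square(row_a, row_b):
        table = np.vstack([row_a, row_b]).astype(float)
        n = table.sum()
        expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
        chi2 = ((table - expected) ** 2 / expected).sum()
        return chi2 / n

    # Toy confusion counts: responses "b", "p", "d" to spoken /b/, /p/, /d/
    b_row = np.array([60, 35, 5])
    p_row = np.array([50, 45, 5])
    d_row = np.array([5, 5, 90])

    print(phi_square(b_row, p_row))  # /b/ vs /p/: similar responses, low value
    print(phi_square(b_row, d_row))  # /b/ vs /d/: distinct responses, high value
    ```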

  12. Reading Spoken Words: Orthographic Effects in Auditory Priming

    ERIC Educational Resources Information Center

    Chereau, Celine; Gaskell, M. Gareth; Dumay, Nicolas

    2007-01-01

    Three experiments examined the involvement of orthography in spoken word processing using a task--unimodal auditory priming with offset overlap--taken to reflect activation of prelexical representations. Two types of prime-target relationship were compared; both involved phonological overlap, but only one had a strong orthographic overlap (e.g.,…

  13. Word reading skill predicts anticipation of upcoming spoken language input: a study of children developing proficiency in reading.

    PubMed

    Mani, Nivedita; Huettig, Falk

    2014-10-01

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing, potentially attributable to participants' literacy skills. Against this background, the current study examined the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, as in previous studies to date, were successfully able to anticipate upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Reconsidering the role of temporal order in spoken word recognition.

    PubMed

    Toscano, Joseph C; Anderson, Nathaniel D; McMurray, Bob

    2013-10-01

    Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes.

  15. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements

    PubMed Central

    Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.

    2016-01-01

    In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory in speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424

  16. Visual attention shift to printed words during spoken word recognition in Chinese: The role of phonological information.

    PubMed

    Shen, Wei; Qu, Qingqing; Tong, Xiuhong

    2018-05-01

    The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at the full-phonological overlap; in Experiment 2, phonological information was manipulated at the partial-phonological overlap; and in Experiment 3, the phonological competitors were manipulated to share either full overlap or partial overlap with targets directly. Results of the three experiments showed that phonological competitor effects were observed in both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.

  17. Primary phonological planning units in spoken word production are language-specific: Evidence from an ERP study.

    PubMed

    Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih

    2017-07-19

    It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.

  18. Individual Differences in Inhibitory Control Relate to Bilingual Spoken Word Processing

    ERIC Educational Resources Information Center

    Mercier, Julie; Pivneva, Irina; Titone, Debra

    2014-01-01

    We investigated whether individual differences in inhibitory control relate to bilingual spoken word recognition. While their eye movements were monitored, native English and native French English-French bilinguals listened to English words (e.g., "field") and looked at pictures corresponding to the target, a within-language competitor…

  19. Accent modulates access to word meaning: Evidence for a speaker-model account of spoken word recognition.

    PubMed

    Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M

    2017-11-01

    Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  20. The effect of background noise on the word activation process in nonnative spoken-word recognition.

    PubMed

    Scharenborg, Odette; Coumans, Juul M J; van Hout, Roeland

    2018-02-01

This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on (non-)native speech recognition? English and Dutch students participated in an English word recognition experiment, in which either a word's onset or offset was masked by noise. The native listeners outperformed the nonnative listeners in all listening conditions. Importantly, however, the effect of noise on the multiple activation process was remarkably similar in native and nonnative listening. The presence of noise increased the set of candidate words considered for recognition in both native and nonnative listening. The results indicate that the observed performance differences between the English and Dutch listeners should not be primarily attributed to a differential effect of noise, but rather to the difference between native and nonnative listening. Additional analyses showed that word-initial information was more important than word-final information during spoken-word recognition. When word-initial information was no longer reliably available, word recognition accuracy dropped and word frequency information could no longer be used, suggesting that word frequency information is strongly tied to the onset of words and the earliest moments of lexical access. Proficiency and inhibition ability were found to influence nonnative spoken-word recognition in noise, with higher proficiency in the nonnative language and worse inhibition ability leading to improved recognition performance. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  1. Learning and consolidation of new spoken words in autism spectrum disorder.

    PubMed

    Henderson, Lisa; Powell, Anna; Gareth Gaskell, M; Norbury, Courtenay

    2014-11-01

    Autism spectrum disorder (ASD) is characterized by rich heterogeneity in vocabulary knowledge and word knowledge that is not well accounted for by current cognitive theories. This study examines whether individual differences in vocabulary knowledge in ASD might be partly explained by a difficulty with consolidating newly learned spoken words and/or integrating them with existing knowledge. Nineteen boys with ASD and 19 typically developing (TD) boys matched on age and vocabulary knowledge showed similar improvements in recognition and recall of novel words (e.g. 'biscal') 24 hours after training, suggesting an intact ability to consolidate explicit knowledge of new spoken word forms. TD children showed competition effects for existing neighbors (e.g. 'biscuit') after 24 hours, suggesting that the new words had been integrated with existing knowledge over time. In contrast, children with ASD showed immediate competition effects that were not significant after 24 hours, suggesting a qualitative difference in the time course of lexical integration. These results are considered from the perspective of the dual-memory systems framework. © 2014 John Wiley & Sons Ltd.

  2. Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

    PubMed

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy

    2012-06-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.

  3. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions

    ERIC Educational Resources Information Center

    Brouwer, Susanne; Bradlow, Ann R.

    2016-01-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…

  4. Conducting spoken word recognition research online: Validation and a new timing method.

    PubMed

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.

  5. Development of Lexical-Semantic Language System: N400 Priming Effect for Spoken Words in 18- and 24-Month Old Children

    ERIC Educational Resources Information Center

    Rama, Pia; Sirri, Louah; Serres, Josette

    2013-01-01

Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related…

  6. Recognition memory for Braille or spoken words: an fMRI study in early blind.

    PubMed

    Burton, Harold; Sinclair, Robert J; Agato, Alvin

    2012-02-15

We examined cortical activity in the early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Presentation mode was Braille or spoken. Responses were larger for identified "new" words read in Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words, and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted individuals noted larger responses for "new" words studied in association with pictures, which created a distinctiveness heuristic source factor that enhanced recollection during remembering. Prior behavioral studies in the early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering "old" words. A larger response when identifying "new" words possibly resulted from exhaustively recollecting the sensory properties of "old" words in modality-appropriate sensory cortices. The uniqueness of a memory role for occipital cortex lies in its cross-modal responses to coding the tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Phonological Neighborhood Effects in Spoken Word Production: An fMRI Study

    ERIC Educational Resources Information Center

    Peramunage, Dasun; Blumstein, Sheila E.; Myers, Emily B.; Goldrick, Matthew; Baese-Berk, Melissa

    2011-01-01

    The current study examined the neural systems underlying lexically conditioned phonetic variation in spoken word production. Participants were asked to read aloud singly presented words, which either had a voiced minimal pair (MP) neighbor (e.g., cape) or lacked a minimal pair (NMP) neighbor (e.g., cake). The voiced neighbor never appeared in the…

  8. Do Phonological Constraints on the Spoken Word Affect Visual Lexical Decision?

    ERIC Educational Resources Information Center

    Lee, Yang; Moreno, Miguel A.; Carello, Claudia; Turvey, M. T.

    2013-01-01

    Reading a word may involve the spoken language in two ways: in the conversion of letters to phonemes according to the conventions of the language's writing system and the assimilation of phonemes according to the language's constraints on speaking. If so, then words that require assimilation when uttered would require a change in the phonemes…

  9. Context and Spoken Word Recognition in a Novel Lexicon

    ERIC Educational Resources Information Center

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2008-01-01

    Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments,…

  10. Visual Speech Primes Open-Set Recognition of Spoken Words

    ERIC Educational Resources Information Center

    Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.

    2009-01-01

    Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins,…

  11. A preliminary study of subjective frequency estimates of words spoken in Cantonese.

    PubMed

    Yip, M C

    2001-06-01

A database is presented of subjective frequency estimates for a set of 30 Chinese homophones. The estimates are based on an analysis of responses from a simple listening task performed by 120 university students. In the listening task, participants were asked to report the first meaning that came to mind upon hearing a Chinese homophone by writing down the corresponding Chinese characters. There was a correlation of .66 between the frequencies of spoken and written words, suggesting that distributional information about lexical representations is generally independent of modality. These subjective frequency counts should be useful in constructing material sets for research on word recognition using spoken Chinese (Cantonese).

  12. Differential Processing of Thematic and Categorical Conceptual Relations in Spoken Word Production

    ERIC Educational Resources Information Center

    de Zubicaray, Greig I.; Hansen, Samuel; McMahon, Katie L.

    2013-01-01

    Studies of semantic context effects in spoken word production have typically distinguished between categorical (or taxonomic) and associative relations. However, associates tend to confound semantic features or morphological representations, such as whole-part relations and compounds (e.g., BOAT-anchor, BEE-hive). Using a picture-word interference…

  13. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    PubMed

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice- and sound-specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: a change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  14. Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind

    PubMed Central

    Burton, Harold; Sinclair, Robert J.; Agato, Alvin

    2012-01-01

We examined cortical activity in the early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 yrs. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was Braille or spoken. Responses were larger for identified “new” words read in Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words, and occipital cortex responded to spoken words but not differentially with “old”/“new” recognition. Left dorsolateral prefrontal cortex had larger responses to “old” words only with Braille. Larger occipital cortex responses to “new” Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted individuals noted larger responses for “new” words studied in association with pictures, which created a distinctiveness heuristic source factor that enhanced recollection during remembering. Prior behavioral studies in the early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering “old” words. A larger response when identifying “new” words possibly resulted from exhaustively recollecting the sensory properties of “old” words in modality-appropriate sensory cortices. The uniqueness of a memory role for occipital cortex lies in its cross-modal responses to coding the tactile properties of Braille. The latter possibly reflects a “sensory echo” that aids recollection. PMID:22251836

  15. Probabilistic Phonotactics as a Cue for Recognizing Spoken Cantonese Words in Speech

    ERIC Educational Resources Information Center

    Yip, Michael C. W.

    2017-01-01

Previous experimental psycholinguistic studies have suggested that probabilistic phonotactic information may hint at the locations of word boundaries in continuous speech and hence offers an interesting solution to the empirical question of how we recognize/segment individual spoken words in speech. We investigated this issue by using…

  16. English Listeners Use Suprasegmental Cues to Lexical Stress Early during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…

  17. The influence of talker and foreign-accent variability on spoken word identification.

    PubMed

    Bent, Tessa; Holt, Rachael Frush

    2013-03-01

    In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect.

  18. Spectrotemporal processing drives fast access to memory traces for spoken words.

    PubMed

    Tavano, A; Grimm, S; Costa-Faidella, J; Slabu, L; Schröger, E; Escera, C

    2012-05-01

    The Mismatch Negativity (MMN) component of the event-related potentials is generated when a detectable spectrotemporal feature of the incoming sound does not match the sensory model set up by preceding repeated stimuli. MMN is enhanced at frontocentral scalp sites for deviant words when compared to acoustically similar deviant pseudowords, suggesting that automatic access to long-term memory traces for spoken words contributes to MMN generation. Does spectrotemporal feature matching also drive automatic lexical access? To test this, we recorded human auditory event-related potentials (ERPs) to disyllabic spoken words and pseudowords within a passive oddball paradigm. We first aimed at replicating the word-related MMN enhancement effect for Spanish, thereby adding to the available cross-linguistic evidence (e.g., Finnish, English). We then probed its resilience to spectrotemporal perturbation by inserting short (20 ms) and long (120 ms) silent gaps between first and second syllables of deviant and standard stimuli. A significantly enhanced, frontocentrally distributed MMN to deviant words was found for stimuli with no gap. The long gap yielded no deviant word MMN, showing that prior expectations of word form limits in a given language influence deviance detection processes. Crucially, the insertion of a short gap suppressed deviant word MMN enhancement at frontocentral sites. We propose that spectrotemporal point-wise matching constitutes a core mechanism for fast serial computations in audition and language, bridging sensory and long-term memory systems. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Hemispheric Differences in Indexical Specificity Effects in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Gonzalez, Julio; McLennan, Conor T.

    2007-01-01

    Variability in talker identity, one type of indexical variation, has demonstrable effects on the speed and accuracy of spoken word recognition. Furthermore, neuropsychological evidence suggests that indexical and linguistic information may be represented and processed differently in the 2 cerebral hemispheres, and is consistent with findings from…

  20. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    PubMed

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. To do so, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words under full attention. Attention manipulation reduced priming magnitude in both experiments in L2. Moreover, L2 word retrieval increased reaction times and reduced accuracy on the simultaneous secondary task to protect its own accuracy and speed.

  1. The time-course of the cross-modal semantic modulation of visual picture processing by naturalistic sounds and spoken words.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2013-01-01

    The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.

  2. Pedagogy for Liberation: Spoken Word Poetry in Urban Schools

    ERIC Educational Resources Information Center

    Fiore, Mia

    2015-01-01

    The Black Arts Movement of the 1960s and 1970s, hip hop of the 1980s and early 1990s, and spoken word poetry have each attempted to initiate the dialogical process outlined by Paulo Freire as necessary in overturning oppression. Each art form has done this by critically engaging with the world and questioning dominant systems of power. However,…

  3. Phonological Competition within the Word: Evidence from the Phoneme Similarity Effect in Spoken Production

    ERIC Educational Resources Information Center

    Cohen-Goldberg, Ariel M.

    2012-01-01

    Theories of spoken production have not specifically addressed whether the phonemes of a word compete with each other for selection during phonological encoding (e.g., whether /t/ competes with /k/ in cat). Spoken production theories were evaluated and found to fall into three classes, theories positing (1) no competition, (2) competition among…

  4. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  5. Speech perception and spoken word recognition: past and present.

    PubMed

Jusczyk, Peter W; Luce, Paul A

    2002-02-01

The scientific study of the perception of spoken language has been an exciting, prolific, and productive area of research for more than 50 yr. We have learned much about infants' and adults' remarkable capacities for perceiving and understanding the sounds of their language, as evidenced by our increasingly sophisticated theories of acquisition, process, and representation. We present a selective but, we hope, representative review of the past half century of research on speech perception, paying particular attention to the historical and theoretical contexts within which this research was conducted. Our foci in this review fall on three principal topics: early work on the discrimination and categorization of speech sounds, more recent efforts to understand the processes and representations that subserve spoken word recognition, and research on how infants acquire the capacity to perceive their native language. Our intent is to provide the reader with a sense of the progress our field has experienced over the last half century in understanding the human's extraordinary capacity for the perception of spoken language.

  6. Feature Statistics Modulate the Activation of Meaning during Spoken Word Processing

    ERIC Educational Resources Information Center

    Devereux, Barry J.; Taylor, Kirsten I.; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K.

    2016-01-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in ("distinctiveness/sharedness") and likelihood of co-occurrence ("correlational…

  7. Positive Emotional Language in the Final Words Spoken Directly Before Execution

    PubMed Central

    Hirschmüller, Sarah; Egloff, Boris

    2016-01-01

How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister, as well as Kashdan and colleagues, previously provided evidence that an increased use of positive emotion words serves as a way to protect and defend against the mortality salience of one’s own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. Using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use indicative of self-references, social orientation, and a present-oriented time focus, as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality. PMID:26793135

  8. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    ERIC Educational Resources Information Center

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied…

  9. The Impact of Orthographic Consistency on German Spoken Word Identification

    ERIC Educational Resources Information Center

    Beyermann, Sandra; Penke, Martina

    2014-01-01

    An auditory lexical decision experiment was conducted to find out whether sound-to-spelling consistency has an impact on German spoken word processing, and whether such an impact is different at different stages of reading development. Four groups of readers (school children in the second, third and fifth grades, and university students)…

  10. Relationships among vocabulary size, nonverbal cognition, and spoken word recognition in adults with cochlear implants

    NASA Astrophysics Data System (ADS)

    Collison, Elizabeth A.; Munson, Benjamin; Carney, Arlene E.

    2002-05-01

    Recent research has attempted to identify the factors that predict speech perception performance among users of cochlear implants (CIs). Studies have found that approximately 20%-60% of the variance in speech perception scores can be accounted for by factors including duration of deafness, etiology, type of device, and length of implant use, leaving approximately 50% of the variance unaccounted for. The current study examines the extent to which vocabulary size and nonverbal cognitive ability predict CI listeners' spoken word recognition. Fifteen postlingually deafened adults with Nucleus or Clarion CIs were given standardized assessments of nonverbal cognitive ability and expressive vocabulary size: the Expressive Vocabulary Test, the Test of Nonverbal Intelligence-III, and the Woodcock-Johnson-III Test of Cognitive Ability, Verbal Comprehension subtest. Two spoken word recognition tasks were administered. In the first, listeners identified isophonemic CVC words. In the second, listeners identified gated words varying in lexical frequency and neighborhood density. Analyses will examine the influence of lexical frequency and neighborhood density on the uniqueness point in the gating task, as well as relationships among nonverbal cognitive ability, vocabulary size, and the two spoken word recognition measures. [Work supported by NIH Grant P01 DC00110 and by the Lions 3M Hearing Foundation.]

  11. How Are Pronunciation Variants of Spoken Words Recognized? A Test of Generalization to Newly Learned Words

    ERIC Educational Resources Information Center

    Pitt, Mark A.

    2009-01-01

    One account of how pronunciation variants of spoken words (center-> "senner" or "sennah") are recognized is that sublexical processes use information about variation in the same phonological environments to recover the intended segments [Gaskell, G., & Marslen-Wilson, W. D. (1998). Mechanisms of phonological inference in speech perception.…

  12. The socially weighted encoding of spoken words: a dual-route approach to speech perception.

    PubMed

    Sumner, Meghan; Kim, Seung Kyung; King, Ed; McGowan, Kevin B

    2013-01-01

    Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially weighted, resulting in sparse, but high-resolution clusters of socially idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  13. Effects of lexical competition on immediate memory span for spoken words.

    PubMed

    Goh, Winston D; Pisoni, David B

    2003-08-01

    Current theories and models of the structural organization of verbal short-term memory are primarily based on evidence obtained from manipulations of features inherent in the short-term traces of the presented stimuli, such as phonological similarity. In the present study, we investigated whether properties of the stimuli that are not inherent in the short-term traces of spoken words would affect performance in an immediate memory span task. We studied the lexical neighbourhood properties of the stimulus items, which are based on the structure and organization of words in the mental lexicon. The experiments manipulated lexical competition by varying the phonological neighbourhood structure (i.e., neighbourhood density and neighbourhood frequency) of the words on a test list while controlling for word frequency and intra-set phonological similarity (family size). Immediate memory span for spoken words was measured under repeated and nonrepeated sampling procedures. The results demonstrated that lexical competition only emerged when a nonrepeated sampling procedure was used and the participants had to access new words from their lexicons. These findings were not dependent on individual differences in short-term memory capacity. Additional results showed that the lexical competition effects did not interact with proactive interference. Analyses of error patterns indicated that item-type errors, but not positional errors, were influenced by the lexical attributes of the stimulus items. These results complement and extend previous findings that have argued for separate contributions of long-term knowledge and short-term memory rehearsal processes in immediate verbal serial recall tasks.
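The neighbourhood manipulation described above rests on the standard one-phoneme definition of a lexical neighbour: two words are neighbours if they differ by a single phoneme substitution, addition, or deletion. A minimal sketch over toy phoneme strings (the lexicon and word forms are invented for illustration, not the study's materials):

```python
# One-phoneme neighbour test over phoneme strings (toy example).
def is_neighbor(a: str, b: str) -> bool:
    """True if a and b differ by one phoneme substitution, addition, or deletion."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if la == lb:  # substitution: exactly one mismatched position
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(la - lb) == 1:  # addition/deletion: drop one symbol from the longer
        longer, shorter = (a, b) if la > lb else (b, a)
        return any(longer[:i] + longer[i + 1:] == shorter
                   for i in range(len(longer)))
    return False

def density(word: str, lexicon: list) -> int:
    """Neighbourhood density: number of lexicon entries one phoneme away."""
    return sum(is_neighbor(word, w) for w in lexicon)

lex = ["kat", "bat", "kab", "at", "kats", "dog"]
n = density("kat", lex)  # bat, kab, at, kats are neighbours; dog is not
```

Dense neighbourhoods produce more lexical competition during access, which is the property the span experiments manipulate.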

  14. "Context and Spoken Word Recognition in a Novel Lexicon": Correction

    ERIC Educational Resources Information Center

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2009-01-01

    Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…

  15. Differential processing of thematic and categorical conceptual relations in spoken word production.

    PubMed

    de Zubicaray, Greig I; Hansen, Samuel; McMahon, Katie L

    2013-02-01

    Studies of semantic context effects in spoken word production have typically distinguished between categorical (or taxonomic) and associative relations. However, associates tend to confound semantic features or morphological representations, such as whole-part relations and compounds (e.g., BOAT-anchor, BEE-hive). Using a picture-word interference paradigm and functional magnetic resonance imaging (fMRI), we manipulated categorical (COW-rat) and thematic (COW-pasture) TARGET-distractor relations in a balanced design, finding interference and facilitation effects on naming latencies, respectively, as well as differential patterns of brain activation compared with an unrelated distractor condition. While both types of distractor relation activated the middle portion of the left middle temporal gyrus (MTG) consistent with retrieval of conceptual or lexical representations, categorical relations involved additional activation of posterior left MTG, consistent with retrieval of a lexical cohort. Thematic relations involved additional activation of the left angular gyrus. These results converge with recent lesion evidence implicating the left inferior parietal lobe in processing thematic relations and may indicate a potential role for this region during spoken word production. © 2013 APA, all rights reserved.

  16. Learning and Consolidation of New Spoken Words in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Henderson, Lisa; Powell, Anna; Gaskell, M. Gareth; Norbury, Courtenay

    2014-01-01

    Autism spectrum disorder (ASD) is characterized by rich heterogeneity in vocabulary knowledge and word knowledge that is not well accounted for by current cognitive theories. This study examines whether individual differences in vocabulary knowledge in ASD might be partly explained by a difficulty with consolidating newly learned spoken words…

  17. Deaf Children With Cochlear Implants Do Not Appear to Use Sentence Context to Help Recognize Spoken Words

    PubMed Central

    Conway, Christopher M.; Deocampo, Joanne A.; Walk, Anne M.; Anaya, Esperanza M.; Pisoni, David B.

    2015-01-01

    Purpose The authors investigated the ability of deaf children with cochlear implants (CIs) to use sentence context to facilitate the perception of spoken words. Method Deaf children with CIs (n = 24) and an age-matched group of children with normal hearing (n = 31) were presented with lexically controlled sentences and were asked to repeat each sentence in its entirety. Performance was analyzed at each of 3 word positions of each sentence (first, second, and third key word). Results Whereas the children with normal hearing showed robust effects of contextual facilitation—improved speech perception for the final words in a sentence—the deaf children with CIs on average showed no such facilitation. Regression analyses indicated that for the deaf children with CIs, Forward Digit Span scores significantly predicted accuracy scores for all 3 positions, whereas performance on the Stroop Color and Word Test, Children’s Version (Golden, Freshwater, & Golden, 2003) predicted how much contextual facilitation was observed at the final word. Conclusions The pattern of results suggests that some deaf children with CIs do not use sentence context to improve spoken word recognition. The inability to use sentence context may be due to possible interactions between language experience and cognitive factors that affect the ability to successfully integrate temporal–sequential information in spoken language. PMID:25029170

  18. A cascaded neuro-computational model for spoken word recognition

    NASA Astrophysics Data System (ADS)

    Hoya, Tetsuya; van Leeuwen, Cees

    2010-03-01

    In human speech recognition, words are analysed at both pre-lexical (i.e., sub-word) and lexical (word) levels. The aim of this paper is to propose a constructive neuro-computational model that incorporates both these levels as cascaded layers of pre-lexical and lexical units. The layered structure enables the system to handle the variability of real speech input. Within the model, receptive fields of the pre-lexical layer consist of radial basis functions; the lexical layer is composed of units that perform pattern matching between their internal template and a series of labels, corresponding to the winning receptive fields in the pre-lexical layer. The model adapts through self-tuning of all units, in combination with the formation of a connectivity structure through unsupervised (first layer) and supervised (higher layers) network growth. Simulation studies show that the model can achieve a level of performance in spoken word recognition similar to that of a benchmark approach using hidden Markov models, while enabling parallel access to word candidates in lexical decision making.
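The cascaded architecture above can be sketched as two layers: a pre-lexical layer of Gaussian radial basis functions that labels each speech frame with its winning receptive field, and a lexical layer that matches the resulting label series against stored word templates. This is an illustrative sketch, not the authors' implementation; the receptive-field centres, "frames", and toy lexicon are invented for the example:

```python
import math

# Sketch of a cascaded pre-lexical/lexical recognizer (illustrative only).
def rbf_activations(frame, centers, width=1.0):
    # Gaussian radial basis response of each pre-lexical receptive field.
    return [math.exp(-sum((c - x) ** 2 for c, x in zip(center, frame))
                     / (2 * width ** 2))
            for center in centers]

def recognize(frames, centers, lexicon):
    # Pre-lexical layer: label each frame with its winning receptive field.
    labels = []
    for frame in frames:
        acts = rbf_activations(frame, centers)
        labels.append(acts.index(max(acts)))
    # Lexical layer: match the label series against each word's template.
    def score(template):
        return sum(a == b for a, b in zip(labels, template)) / max(len(template), 1)
    return max(lexicon, key=lambda w: score(lexicon[w]))

centers = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]   # three receptive fields
lexicon = {"pat": [0, 1, 2], "tap": [2, 1, 0]}   # label-sequence templates
frames = [[0.1, -0.1], [0.9, 1.1], [1.9, 0.2]]   # noisy input near fields 0, 1, 2
word = recognize(frames, centers, lexicon)
```

Because every word template is scored against the same label series, word candidates are effectively accessed in parallel, in the spirit of the lexical decision mechanism the abstract describes.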

  19. Cross-modal representation of spoken and written word meaning in left pars triangularis.

    PubMed

    Liuzzi, Antonietta Gabriella; Bruffaerts, Rose; Peeters, Ronald; Adamczuk, Katarzyna; Keuleers, Emmanuel; De Deyne, Simon; Storms, Gerrit; Dupont, Patrick; Vandenberghe, Rik

    2017-04-15

    The correspondence in meaning extracted from written versus spoken input remains to be fully understood neurobiologically. Here, in a total of 38 subjects, the functional anatomy of cross-modal semantic similarity for concrete words was determined based on a dual criterion: First, a voxelwise univariate analysis had to show significant activation during a semantic task (property verification) performed with written and spoken concrete words compared to the perceptually matched control condition. Second, in an independent dataset, in these clusters, the similarity in fMRI response pattern to two distinct entities, one presented as a written and the other as a spoken word, had to correlate with the similarity in meaning between these entities. The left ventral occipitotemporal transition zone and ventromedial temporal cortex, retrosplenial cortex, pars orbitalis bilaterally, and the left pars triangularis were all activated in the univariate contrast. Only the left pars triangularis showed a cross-modal semantic similarity effect. There was no effect of phonological or orthographic similarity in this region. The cross-modal semantic similarity effect was confirmed by a secondary analysis in the cytoarchitectonically defined BA45. A semantic similarity effect was also present in the ventral occipital regions but only within the visual modality, and in the anterior superior temporal cortex only within the auditory modality. This study provides direct evidence for the coding of word meaning in BA45 and positions its contribution to semantic processing at the confluence of input-modality specific pathways that code for meaning within the respective input modalities. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2011-10-01

    We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
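The sensitivity and criterion estimates mentioned above follow the standard yes/no signal-detection formulas: d' = z(H) - z(F) and c = -(z(H) + z(F))/2, where H and F are the hit and false-alarm rates. A minimal sketch (the rates below are made-up numbers, not the study's data):

```python
from statistics import NormalDist

# Standard yes/no signal detection: sensitivity d' and criterion c
# from hit and false-alarm rates. Example rates are invented.
z = NormalDist().inv_cdf  # z-transform (inverse standard normal CDF)

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity: separation of signal and noise distributions in z units."""
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate: float, fa_rate: float) -> float:
    """Response criterion c: positive values indicate a conservative bias."""
    return -0.5 * (z(hit_rate) + z(fa_rate))

dp = d_prime(0.85, 0.20)
c = criterion(0.85, 0.20)
```

Separating d' from c is what lets the experiments above claim a genuine change in visual sensitivity rather than a shift in response bias.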

  1. Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans

    NASA Astrophysics Data System (ADS)

    Pei, Xiaomei; Barbour, Dennis L.; Leuthardt, Eric C.; Schalk, Gerwin

    2011-08-01

    Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility to 'read the mind', i.e. to interpret internally-generated speech, has been scarce. In this study, we found that it is possible to use signals recorded from the surface of the brain (electrocorticography) to discriminate the vowels and consonants embedded in spoken and in imagined words, and we defined the cortical areas that held the most information about discrimination of vowels and consonants. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.

  2. Development of lexical-semantic language system: N400 priming effect for spoken words in 18- and 24-month old children.

    PubMed

    Rämä, Pia; Sirri, Louah; Serres, Josette

    2013-04-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that the N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds the effect was observed similarly to 24-month-olds only in those children with higher word production ability. The results suggest that words are categorically organized in the mental lexicon of children at the age of 2 years, and even earlier in children with a high vocabulary. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Eye movements during spoken word recognition in Russian children.

    PubMed

    Sekerina, Irina A; Brooks, Patricia J

    2007-09-01

    This study explores incremental processing in spoken word recognition in Russian 5- and 6-year-olds and adults using free-viewing eye-tracking. Participants viewed scenes containing pictures of four familiar objects and clicked on a target embedded in a spoken instruction. In the cohort condition, two object names shared identical three-phoneme onsets. In the noncohort condition, all object names had unique onsets. Coarse-grain analyses of eye movements indicated that adults produced looks to the competitor on significantly more cohort trials than on noncohort trials, whereas children surprisingly failed to demonstrate cohort competition due to widespread exploratory eye movements across conditions. Fine-grain analyses, in contrast, showed a similar time course of eye movements across children and adults, but with cohort competition lingering more than 1 s longer in children. The dissociation between coarse-grain and fine-grain eye movements indicates a need to consider multiple behavioral measures in making developmental comparisons in language processing.

  4. The gender congruency effect during bilingual spoken-word recognition

    PubMed Central

    Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa

    2016-01-01

    We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132

  5. The Interaction of Lexical Semantics and Cohort Competition in Spoken Word Recognition: An fMRI Study

    ERIC Educational Resources Information Center

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A.; Marslen-Wilson, William D.; Tyler, Lorraine K.

    2011-01-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning…

  6. Causal Influence of Articulatory Motor Cortex on Comprehending Single Spoken Words: TMS Evidence.

    PubMed

    Schomers, Malte R; Kirilina, Evgeniya; Weigand, Anne; Bajbouj, Malek; Pulvermüller, Friedemann

    2015-10-01

    Classic wisdom had been that motor and premotor cortex contribute to motor execution but not to higher cognition and language comprehension. In contrast, mounting evidence from neuroimaging, patient research, and transcranial magnetic stimulation (TMS) suggests sensorimotor interaction and, specifically, that the articulatory motor cortex is important for classifying meaningless speech sounds into phonemic categories. However, whether these findings speak to the comprehension issue is unclear, because language comprehension does not require explicit phonemic classification and previous results may therefore relate to factors alien to semantic understanding. We here used the standard psycholinguistic test of spoken word comprehension, the word-to-picture-matching task, and concordant TMS to articulatory motor cortex. TMS pulses were applied to primary motor cortex controlling either the lips or the tongue as subjects heard critical word stimuli starting with bilabial lip-related or alveolar tongue-related stop consonants (e.g., "pool" or "tool"). A significant cross-over interaction showed that articulatory motor cortex stimulation delayed comprehension responses for phonologically incongruent words relative to congruous ones (i.e., lip area TMS delayed "tool" relative to "pool" responses). As local TMS to articulatory motor areas differentially delays the comprehension of phonologically incongruous spoken words, we conclude that motor systems can take a causal role in semantic comprehension and, hence, higher cognition. © The Author 2014. Published by Oxford University Press.

  7. Causal Influence of Articulatory Motor Cortex on Comprehending Single Spoken Words: TMS Evidence

    PubMed Central

    Schomers, Malte R.; Kirilina, Evgeniya; Weigand, Anne; Bajbouj, Malek; Pulvermüller, Friedemann

    2015-01-01

    Classic wisdom had been that motor and premotor cortex contribute to motor execution but not to higher cognition and language comprehension. In contrast, mounting evidence from neuroimaging, patient research, and transcranial magnetic stimulation (TMS) suggests sensorimotor interaction and, specifically, that the articulatory motor cortex is important for classifying meaningless speech sounds into phonemic categories. However, whether these findings speak to the comprehension issue is unclear, because language comprehension does not require explicit phonemic classification and previous results may therefore relate to factors alien to semantic understanding. We here used the standard psycholinguistic test of spoken word comprehension, the word-to-picture-matching task, and concordant TMS to articulatory motor cortex. TMS pulses were applied to primary motor cortex controlling either the lips or the tongue as subjects heard critical word stimuli starting with bilabial lip-related or alveolar tongue-related stop consonants (e.g., “pool” or “tool”). A significant cross-over interaction showed that articulatory motor cortex stimulation delayed comprehension responses for phonologically incongruent words relative to congruous ones (i.e., lip area TMS delayed “tool” relative to “pool” responses). As local TMS to articulatory motor areas differentially delays the comprehension of phonologically incongruous spoken words, we conclude that motor systems can take a causal role in semantic comprehension and, hence, higher cognition. PMID:25452575

  8. Cross-modal metaphorical mapping of spoken emotion words onto vertical space.

    PubMed

    Montoro, Pedro R; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando

    2015-01-01

    From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a 'positive-up/negative-down' embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis.

  9. Cross-modal metaphorical mapping of spoken emotion words onto vertical space

    PubMed Central

    Montoro, Pedro R.; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando

    2015-01-01

    From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a ‘positive-up/negative-down’ embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis. PMID:26322007

  10. The Roles of Tonal and Segmental Information in Mandarin Spoken Word Recognition: An Eyetracking Study

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2010-01-01

    We used eyetracking to examine how tonal versus segmental information influence spoken word recognition in Mandarin Chinese. Participants heard an auditory word and were required to identify its corresponding picture from an array that included the target item ("chuang2" "bed"), a phonological competitor (segmental: chuang1 "window"; cohort:…

  11. Neural Signatures of Language Co-activation and Control in Bilingual Spoken Word Comprehension

    PubMed Central

    Chen, Peiyao; Bobb, Susan C.; Hoshino, Noriko; Marian, Viorica

    2017-01-01

    To examine the neural signatures of language co-activation and control during bilingual spoken word comprehension, Korean-English bilinguals and English monolinguals were asked to make overt or covert semantic relatedness judgments on auditorily-presented English word pairs. In two critical conditions, participants heard word pairs consisting of an English-Korean interlingual homophone (e.g., the sound /mu:n/ means “moon” in English and “door” in Korean) as the prime and an English word as the target. In the homophone-related condition, the target (e.g., “lock”) was related to the homophone’s Korean meaning, but not related to the homophone’s English meaning. In the homophone-unrelated condition, the target was unrelated to either the homophone’s Korean meaning or the homophone’s English meaning. In overtly responded situations, ERP results revealed that the reduced N400 effect in bilinguals for homophone-related word pairs correlated positively with the amount of their daily exposure to Korean. In covertly responded situations, ERP results showed a reduced late positive component for homophone-related word pairs in the right hemisphere, and this late positive effect was related to the neural efficiency of suppressing interference in a non-linguistic task. Together, these findings suggest 1) that the degree of language co-activation in bilingual spoken word comprehension is modulated by the amount of daily exposure to the non-target language; and 2) that bilinguals who are less influenced by cross-language activation may also have greater efficiency in suppressing interference in a non-linguistic task. PMID:28372943

  12. Neural signatures of language co-activation and control in bilingual spoken word comprehension.

    PubMed

    Chen, Peiyao; Bobb, Susan C; Hoshino, Noriko; Marian, Viorica

    2017-06-15

To examine the neural signatures of language co-activation and control during bilingual spoken word comprehension, Korean-English bilinguals and English monolinguals were asked to make overt or covert semantic relatedness judgments on auditorily-presented English word pairs. In two critical conditions, participants heard word pairs consisting of an English-Korean interlingual homophone (e.g., the sound /mu:n/ means "moon" in English and "door" in Korean) as the prime and an English word as the target. In the homophone-related condition, the target (e.g., "lock") was related to the homophone's Korean meaning, but not related to the homophone's English meaning. In the homophone-unrelated condition, the target was unrelated to either the homophone's Korean meaning or the homophone's English meaning. When responses were overt, ERP results revealed that the reduced N400 effect in bilinguals for homophone-related word pairs correlated positively with the amount of their daily exposure to Korean. When responses were covert, ERP results showed a reduced late positive component for homophone-related word pairs in the right hemisphere, and this late positive effect was related to the neural efficiency of suppressing interference in a non-linguistic task. Together, these findings suggest 1) that the degree of language co-activation in bilingual spoken word comprehension is modulated by the amount of daily exposure to the non-target language; and 2) that bilinguals who are less influenced by cross-language activation may also have greater efficiency in suppressing interference in a non-linguistic task. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Children reading spoken words: interactions between vocabulary and orthographic expectancy.

    PubMed

    Wegener, Signy; Wang, Hua-Chen; de Lissa, Peter; Robidoux, Serje; Nation, Kate; Castles, Anne

    2018-05-01

    There is an established association between children's oral vocabulary and their word reading but its basis is not well understood. Here, we present evidence from eye movements for a novel mechanism underlying this association. Two groups of 18 Grade 4 children received oral vocabulary training on one set of 16 novel words (e.g., 'nesh', 'coib'), but no training on another set. The words were assigned spellings that were either predictable from phonology (e.g., nesh) or unpredictable (e.g., koyb). These were subsequently shown in print, embedded in sentences. Reading times were shorter for orally familiar than unfamiliar items, and for words with predictable than unpredictable spellings but, importantly, there was an interaction between the two: children demonstrated a larger benefit of oral familiarity for predictable than for unpredictable items. These findings indicate that children form initial orthographic expectations about spoken words before first seeing them in print. A video abstract of this article can be viewed at: https://youtu.be/jvpJwpKMM3E. © 2017 John Wiley & Sons Ltd.

  14. Examining the Time Course of Indexical Specificity Effects in Spoken Word Recognition

    ERIC Educational Resources Information Center

    McLennan, Conor T.; Luce, Paul A.

    2005-01-01

    Variability in talker identity and speaking rate, commonly referred to as indexical variation, has demonstrable effects on the speed and accuracy of spoken word recognition. The present study examines the time course of indexical specificity effects to evaluate the hypothesis that such effects occur relatively late in the perceptual processing of…

  15. Crossmodal Semantic Priming by Naturalistic Sounds and Spoken Words Enhances Visual Sensitivity

    ERIC Educational Resources Information Center

    Chen, Yi-Chuan; Spence, Charles

    2011-01-01

    We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when…

  16. Are Young Children with Cochlear Implants Sensitive to the Statistics of Words in the Ambient Spoken Language?

    ERIC Educational Resources Information Center

    Guo, Ling-Yu; McGregor, Karla K.; Spencer, Linda J.

    2015-01-01

    Purpose: The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation.…

  17. L[subscript 1] and L[subscript 2] Spoken Word Processing: Evidence from Divided Attention Paradigm

    ERIC Educational Resources Information Center

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-01-01

    The present study aims to reveal some facts concerning first language (L[subscript 1]) and second language (L[subscript 2]) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…

  18. Context Effects and Spoken Word Recognition of Chinese: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Yip, Michael C. W.; Zhai, Mingjun

    2018-01-01

    This study examined the time-course of context effects on spoken word recognition during Chinese sentence processing. We recruited 60 native Mandarin listeners to participate in an eye-tracking experiment. In this eye-tracking experiment, listeners were told to listen to a sentence carefully, which ended with a Chinese homophone, and look at…

  19. Using spoken words to guide open-ended category formation.

    PubMed

    Chauhan, Aneesh; Seabra Lopes, Luís

    2011-11-01

    Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.
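The incremental, word-grounded category formation the abstract describes can be caricatured in a few lines. This is a hedged toy sketch, not the paper's architecture: the class and method names (`WordGroundedCategorizer`, `teach`, `name`), the running-mean prototypes, and the nearest-prototype naming rule are all illustrative assumptions.

```python
# Toy sketch of open-ended, word-grounded category formation
# (illustrative assumptions: running-mean prototypes and
# nearest-prototype naming; NOT the paper's implementation).
class WordGroundedCategorizer:
    def __init__(self):
        self.prototypes = {}  # category name -> (mean feature vector, count)

    def teach(self, word, features):
        # Incremental update: new category names can arrive at any time,
        # so the set of categories is open-ended.
        if word not in self.prototypes:
            self.prototypes[word] = (list(features), 1)
        else:
            mean, n = self.prototypes[word]
            mean = [(m * n + f) / (n + 1) for m, f in zip(mean, features)]
            self.prototypes[word] = (mean, n + 1)

    def name(self, features):
        # Nearest-prototype naming by squared Euclidean distance.
        def dist(word):
            mean, _ = self.prototypes[word]
            return sum((m - f) ** 2 for m, f in zip(mean, features))
        return min(self.prototypes, key=dist)

robot = WordGroundedCategorizer()
robot.teach("ball", [1.0, 0.1])   # instructor names objects in a shared scene
robot.teach("ball", [0.9, 0.2])
robot.teach("cup", [0.1, 1.0])
guess = robot.name([0.95, 0.15])
if guess != "ball":
    # Corrective feedback from the human teacher, as in the language game.
    robot.teach("ball", [0.95, 0.15])
```

The point of the sketch is only the loop structure of the evaluation in the paper: the teacher names, the robot grounds the word in its percept, and naming errors trigger corrective teaching.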

  20. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic.

    PubMed

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2018-01-01

This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children's phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children's early morphological awareness in SpA explained variance in children's gains in reading fluency in StA. These findings have important theoretical and practical implications for Arabic reading theory in general, and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.

  1. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    PubMed Central

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2018-01-01

This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical implications for Arabic reading theory in general, and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts. PMID:29686633

  2. Eye Movements to Pictures Reveal Transient Semantic Activation during Spoken Word Recognition

    ERIC Educational Resources Information Center

    Yee, Eiling; Sedivy, Julie C.

    2006-01-01

    Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an…

  3. An investigation of phonology and orthography in spoken-word recognition.

    PubMed

    Slowiaczek, Louisa M; Soltano, Emily G; Wieting, Shani J; Bishop, Karyn L

    2003-02-01

The possible influence of initial phonological and/or orthographic information on spoken-word processing was examined in six experiments modelled after and extending the work of Jakimik, Cole, and Rudnicky (1985). Following Jakimik et al., Experiment 1 used polysyllabic primes with monosyllabic targets (e.g., BUCKLE-BUCK; MYSTERY-MISS). Experiments 2, 3, and 4 used polysyllabic primes and polysyllabic targets whose initial syllables shared phonological information (e.g., NUISANCE-NOODLE), orthographic information (e.g., RATIO-RATIFY), both (e.g., FUNNEL-FUNNY), or were unrelated (e.g., SERMON-NOODLE). Participants engaged in a lexical decision (Experiments 1, 3, and 4) or a shadowing (Experiment 2) task with a single-trial (Experiments 2 and 3) or subsequent-trial (Experiments 1 and 4) priming procedure. Experiment 5 tested primes and targets that varied in the number of shared graphemes while holding shared phonemes constant at one. Experiment 6 used the procedures of Experiment 2 but a low proportion of related trials. Results revealed that response times were facilitated for prime-target pairs that shared initial phonological and orthographic information. These results were confirmed under conditions in which strategic processing was greatly reduced, suggesting that phonological and orthographic information is automatically activated during spoken-word processing.

  4. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment

    PubMed Central

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both the TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070

  5. Long-term temporal tracking of speech rate affects spoken-word recognition.

    PubMed

    Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin

    2014-08-01

    Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. © The Author(s) 2014.

  6. Encoding lexical tones in jTRACE: a simulation of monosyllabic spoken word recognition in Mandarin Chinese.

    PubMed

    Shuai, Lan; Malins, Jeffrey G

    2017-02-01

Despite being one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we built on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to published data from human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
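The key modeling move, treating tone as a lexical unit alongside segmental units so that it contributes to competition between words, can be pictured with a toy sketch. This is not the actual jTRACE unit scheme; the `units` and `overlap` helpers, and the choice to weight tone exactly like a segment, are illustrative assumptions.

```python
# Toy coding of a Mandarin monosyllable as segmental units plus a
# tone unit, so tone contributes to lexical similarity alongside
# onsets and rimes (an illustrative simplification, not the actual
# jTRACE unit coding).
def units(onset, rime, tone):
    return {("onset", onset), ("rime", rime), ("tone", tone)}

def overlap(w1, w2):
    """Proportion of shared units: a crude proxy for how strongly
    two words compete during recognition."""
    return len(w1 & w2) / len(w1 | w2)

ma1 = units("m", "a", 1)  # ma1 "mother"
ma3 = units("m", "a", 3)  # ma3 "horse": segments match, tone mismatches
mi1 = units("m", "i", 1)  # tone matches, rime mismatches
da4 = units("d", "a", 4)  # onset and tone both mismatch

segmental_competitor = overlap(ma1, ma3)  # 2 shared units out of 4
tonal_competitor = overlap(ma1, mi1)      # also 2 shared units out of 4
distant_word = overlap(ma1, da4)          # 1 shared unit out of 5
```

Because tone is weighted like a segment here, a tone-mismatching competitor (ma3) overlaps the target exactly as much as a rime-mismatching one (mi1), the kind of symmetry that tonal-contrast simulations can then probe.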

  7. Oscillatory brain responses in spoken word production reflect lexical frequency and sentential constraint.

    PubMed

    Piai, Vitória; Roelofs, Ardi; Maris, Eric

    2014-01-01

    Two fundamental factors affecting the speed of spoken word production are lexical frequency and sentential constraint, but little is known about their timing and electrophysiological basis. In the present study, we investigated event-related potentials (ERPs) and oscillatory brain responses induced by these factors, using a task in which participants named pictures after reading sentences. Sentence contexts were either constraining or nonconstraining towards the final word, which was presented as a picture. Picture names varied in their frequency of occurrence in the language. Naming latencies and electrophysiological responses were examined as a function of context and lexical frequency. Lexical frequency is an index of our cumulative learning experience with words, so lexical-frequency effects most likely reflect access to memory representations for words. Pictures were named faster with constraining than nonconstraining contexts. Associated with this effect, starting around 400 ms pre-picture presentation, oscillatory power between 8 and 30 Hz was lower for constraining relative to nonconstraining contexts. Furthermore, pictures were named faster with high-frequency than low-frequency names, but only for nonconstraining contexts, suggesting differential ease of memory access as a function of sentential context. Associated with the lexical-frequency effect, starting around 500 ms pre-picture presentation, oscillatory power between 4 and 10 Hz was higher for high-frequency than for low-frequency names, but only for constraining contexts. Our results characterise electrophysiological responses associated with lexical frequency and sentential constraint in spoken word production, and point to new avenues for studying these fundamental factors in language production. © 2013 Published by Elsevier Ltd.

  8. Spoken word recognition by Latino children learning Spanish as their first language*

    PubMed Central

    HURTADO, NEREYDA; MARCHMAN, VIRGINIA A.; FERNALD, ANNE

    2010-01-01

    Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development. PMID:17542157

  9. Setting the Tone: An ERP Investigation of the Influences of Phonological Similarity on Spoken Word Recognition in Mandarin Chinese

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2012-01-01

    We investigated the influences of phonological similarity on the time course of spoken word processing in Mandarin Chinese. Event related potentials were recorded while adult native speakers of Mandarin ("N" = 19) judged whether auditory words matched or mismatched visually presented pictures. Mismatching words were of the following…

  10. Some considerations in evaluating spoken word recognition by normal-hearing, noise-masked normal-hearing, and cochlear implant listeners. I: The effects of response format.

    PubMed

    Sommers, M S; Kirk, K I; Pisoni, D B

    1997-04-01

The purpose of the present studies was to assess the validity of using closed-set response formats to measure two cognitive processes essential for recognizing spoken words: perceptual normalization (the ability to accommodate acoustic-phonetic variability) and lexical discrimination (the ability to isolate words in the mental lexicon). In addition, the experiments were designed to examine the effects of response format on evaluation of these two abilities in normal-hearing (NH), noise-masked normal-hearing (NMNH), and cochlear implant (CI) subject populations. The speech recognition performance of NH, NMNH, and CI listeners was measured using both open- and closed-set response formats under a number of experimental conditions. To assess talker normalization abilities, identification scores for words produced by a single talker were compared with recognition performance for items produced by multiple talkers. To examine lexical discrimination, performance for words that are phonetically similar to many other words (hard words) was compared with scores for items with few phonetically similar competitors (easy words). Open-set word identification for all subjects was significantly poorer when stimuli were produced in lists with multiple talkers compared with conditions in which all of the words were spoken by a single talker. Open-set word recognition also was better for lexically easy compared with lexically hard words. Closed-set tests, in contrast, failed to reveal the effects of either talker variability or lexical difficulty even when the response alternatives provided were systematically selected to maximize confusability with target items. These findings suggest that, although closed-set tests may provide important information for clinical assessment of speech perception, they may not adequately evaluate a number of cognitive processes that are necessary for recognizing spoken words. The parallel results obtained across all subject groups indicate that NH

  11. Intentional and Reactive Inhibition during Spoken-Word Stroop Task Performance in People with Aphasia

    ERIC Educational Resources Information Center

    Pompon, Rebecca Hunting; McNeil, Malcolm R.; Spencer, Kristie A.; Kendall, Diane L.

    2015-01-01

    Purpose: The integrity of selective attention in people with aphasia (PWA) is currently unknown. Selective attention is essential for everyday communication, and inhibition is an important part of selective attention. This study explored components of inhibition--both intentional and reactive inhibition--during spoken-word production in PWA and in…

  12. Interaction in Spoken Word Recognition Models: Feedback Helps.

    PubMed

    Magnuson, James S; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D

    2018-01-01

Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as with it. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.
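The interactive-activation dynamic at issue, bottom-up flow from phonemes to words with optional top-down feedback from words to their phonemes, can be caricatured in a few lines. This is a hedged toy sketch under stated assumptions (a three-word lexicon, linear activation updates, no lateral inhibition or decay), not the TRACE implementation used in the paper.

```python
# Toy interactive-activation sketch (illustrative assumptions:
# tiny lexicon, linear updates, no lateral inhibition; NOT the
# actual TRACE implementation).
LEXICON = {
    "cat": ["k", "a", "t"],
    "cap": ["k", "a", "p"],
    "dog": ["d", "o", "g"],
}

def scores(phoneme_input, feedback, steps=10, rate=0.1):
    phon = dict(phoneme_input)          # bottom-up acoustic evidence
    words = {w: 0.0 for w in LEXICON}
    for _ in range(steps):
        # Bottom-up: each word pools the activation of its phonemes.
        for w, ps in LEXICON.items():
            words[w] += rate * sum(phon.get(p, 0.0) for p in ps)
        if feedback:
            # Top-down: each word supports the phonemes it contains.
            for w, ps in LEXICON.items():
                for p in ps:
                    phon[p] = phon.get(p, 0.0) + rate * words[w]
    return words

# Degraded input for "cat": the final /t/ is noisy and /p/ gets
# some spurious support, so "cap" is a close competitor.
noisy = {"k": 1.0, "a": 1.0, "t": 0.4, "p": 0.35}
with_fb = scores(noisy, feedback=True)
no_fb = scores(noisy, feedback=False)
```

With this degraded input both settings pick "cat", but feedback widens the winner's margin over "cap": the leading word boosts its own phonemes, which in turn boost it, a rich-get-richer dynamic of the kind the simulations found helpful as noise increases.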

  13. Interaction in Spoken Word Recognition Models: Feedback Helps

    PubMed Central

    Magnuson, James S.; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D.

    2018-01-01

Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as with it. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis. PMID:29666593

  14. Non-linear processing of a linear speech stream: The influence of morphological structure on the recognition of spoken Arabic words.

    PubMed

    Gwilliams, L; Marantz, A

    2015-08-01

    Although the significance of morphological structure is established in visual word processing, its role in auditory processing remains unclear. Using magnetoencephalography we probe the significance of the root morpheme for spoken Arabic words with two experimental manipulations. First we compare a model of auditory processing that calculates probable lexical outcomes based on whole-word competitors, versus a model that only considers the root as relevant to lexical identification. Second, we assess violations to the root-specific Obligatory Contour Principle (OCP), which disallows root-initial consonant gemination. Our results show root prediction to significantly correlate with neural activity in superior temporal regions, independent of predictions based on whole-word competitors. Furthermore, words that violated the OCP constraint were significantly easier to dismiss as valid words than probability-matched counterparts. The findings suggest that lexical auditory processing is dependent upon morphological structure, and that the root forms a principal unit through which spoken words are recognised. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  15. The influence of orthographic experience on the development of phonological preparation in spoken word production.

    PubMed

    Li, Chuchu; Wang, Min

    2017-08-01

Three sets of experiments using picture naming tasks with the form preparation paradigm investigated the influence of orthographic experience on the development of the phonological preparation unit in spoken word production in native Mandarin-speaking children. Participants included kindergarten children who have not received formal literacy instruction, Grade 1 children who are comparatively more exposed to the alphabetic pinyin system and have very limited Chinese character knowledge, Grades 2 and 4 children who have better character knowledge and more exposure to characters, and skilled adult readers who have the most advanced character knowledge and most exposure to characters. Only Grade 1 children showed the form preparation effect in the same initial consonant condition (i.e., when a list of target words shared the initial consonant). Both Grade 4 children and adults showed the preparation effect when the initial syllable (but not tone) among target words was shared. Kindergartners and Grade 2 children only showed the preparation effect when the initial syllable including tonal information was shared. These developmental changes in phonological preparation could be interpreted as a joint function of the modification of phonological representation and attentional shift. Extensive pinyin experience encourages speakers to attend to and select the onset phoneme in phonological preparation, whereas extensive character experience encourages speakers to prepare spoken words in syllables.

  16. Interference from related actions in spoken word production: Behavioural and fMRI evidence.

    PubMed

    de Zubicaray, Greig; Fraser, Douglas; Ramajoo, Kori; McMahon, Katie

    2017-02-01

    Few investigations of lexical access in spoken word production have investigated the cognitive and neural mechanisms involved in action naming. These are likely to be more complex than the mechanisms involved in object naming, due to the ways in which conceptual features of action words are represented. The present study employed a blocked cyclic naming paradigm to examine whether related action contexts elicit a semantic interference effect akin to that observed with categorically related objects. Participants named pictures of intransitive actions to avoid a confound with object processing. In Experiment 1, body-part related actions (e.g., running, walking, skating, hopping) were named significantly slower compared to unrelated actions (e.g., laughing, running, waving, hiding). Experiment 2 employed perfusion functional Magnetic Resonance Imaging (fMRI) to investigate the neural mechanisms involved in this semantic interference effect. Compared to unrelated actions, naming related actions elicited significant perfusion signal increases in frontotemporal cortex, including bilateral inferior frontal gyrus (IFG) and hippocampus, and decreases in bilateral posterior temporal, occipital and parietal cortices, including intraparietal sulcus (IPS). The findings demonstrate a role for temporoparietal cortex in conceptual-lexical processing of intransitive action knowledge during spoken word production, and support the proposed involvement of interference resolution and incremental learning mechanisms in the blocked cyclic naming paradigm. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Neural Correlates of Priming Effects in Children during Spoken Word Processing with Orthographic Demands

    ERIC Educational Resources Information Center

    Cao, Fan; Khalid, Kainat; Zaveri, Rishi; Bolger, Donald J.; Bitan, Tali; Booth, James R.

    2010-01-01

    Priming effects were examined in 40 children (9-15 years old) using functional magnetic resonance imaging (fMRI). An orthographic judgment task required participants to determine if two sequentially presented spoken words had the same spelling for the rime. Four lexical conditions were designed: similar orthography and phonology (O[superscript…

  18. A Task-Dependent Causal Role for Low-Level Visual Processes in Spoken Word Comprehension

    ERIC Educational Resources Information Center

    Ostarek, Markus; Huettig, Falk

    2017-01-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual…

  19. The Role of Grammatical Category Information in Spoken Word Retrieval

    PubMed Central

    Duràn, Carolina Palma; Pillon, Agnesa

    2011-01-01

    We investigated the role of lexical syntactic information such as grammatical gender and category in spoken word retrieval processes by using a blocking paradigm in picture and written word naming experiments. In Experiments 1, 3, and 4, we found that the naming of target words (nouns) from pictures or written words was faster when these target words were named within a list in which only words from the same grammatical category had to be produced (homogeneous category list: all nouns) than when they had to be produced within a list that also comprised words from another grammatical category (heterogeneous category list: nouns and verbs). On the other hand, we detected no significant facilitation effect when the target words had to be named within a homogeneous gender list (all masculine nouns) compared to a heterogeneous gender list (both masculine and feminine nouns). In Experiment 2, using the same blocking paradigm while manipulating the semantic category of the items, we found that naming latencies were significantly slower in the homogeneous semantic category condition than in the heterogeneous one. Thus semantic category homogeneity caused an interference effect, not a facilitation effect like grammatical category homogeneity. Finally, in Experiment 5, nouns in the heterogeneous category condition had to be named just after a verb (category-switching position) or a noun (same-category position). We found a facilitation effect of category homogeneity but no significant effect of position, which showed that the effect of category homogeneity found in Experiments 1, 3, and 4 was not due to a cost of switching between grammatical categories in the heterogeneous grammatical category list. These findings support the hypothesis that grammatical category information impacts word retrieval processes in speech production, even when words are produced in isolation. They are discussed within the context of extant theories of lexical production.

  20. The role of grammatical category information in spoken word retrieval.

    PubMed

    Duràn, Carolina Palma; Pillon, Agnesa

    2011-01-01

    We investigated the role of lexical syntactic information such as grammatical gender and category in spoken word retrieval processes by using a blocking paradigm in picture and written word naming experiments. In Experiments 1, 3, and 4, we found that the naming of target words (nouns) from pictures or written words was faster when these target words were named within a list in which only words from the same grammatical category had to be produced (homogeneous category list: all nouns) than when they had to be produced within a list that also comprised words from another grammatical category (heterogeneous category list: nouns and verbs). On the other hand, we detected no significant facilitation effect when the target words had to be named within a homogeneous gender list (all masculine nouns) compared to a heterogeneous gender list (both masculine and feminine nouns). In Experiment 2, using the same blocking paradigm while manipulating the semantic category of the items, we found that naming latencies were significantly slower in the homogeneous semantic category condition than in the heterogeneous one. Thus semantic category homogeneity caused an interference effect, not a facilitation effect like grammatical category homogeneity. Finally, in Experiment 5, nouns in the heterogeneous category condition had to be named just after a verb (category-switching position) or a noun (same-category position). We found a facilitation effect of category homogeneity but no significant effect of position, which showed that the effect of category homogeneity found in Experiments 1, 3, and 4 was not due to a cost of switching between grammatical categories in the heterogeneous grammatical category list. These findings support the hypothesis that grammatical category information impacts word retrieval processes in speech production, even when words are produced in isolation. They are discussed within the context of extant theories of lexical production.

  1. Transnationalism and Rights in the Age of Empire: Spoken Word, Music, and Digital Culture in the Borderlands

    ERIC Educational Resources Information Center

    Hicks, Emily D.

    2004-01-01

    This article documents cultural activities, including the performance of music and spoken word, in the San Diego-Tijuana region, activities that emerged from rhizomatic, transnational points of contact.

  2. Some Computational Analyses of the PBK Test: Effects of Frequency and Lexical Density on Spoken Word Recognition

    PubMed Central

    Meyer, Ted A.; Pisoni, David B.

    2012-01-01

    Objective The Phonetically Balanced Kindergarten (PBK) Test (Haskins, Reference Note 2) has been used for almost 50 yr to assess spoken word recognition performance in children with hearing impairments. The test originally consisted of four lists of 50 words, but only three of the lists (lists 1, 3, and 4) were considered “equivalent” enough to be used clinically with children. Our goal was to determine if the lexical properties of the different PBK lists could explain any differences between the three “equivalent” lists and the fourth PBK list (List 2) that has not been used in clinical testing. Design Word frequency and lexical neighborhood frequency and density measures were obtained from a computerized database for all of the words on the four lists from the PBK Test as well as the words from a single PB-50 (Egan, 1948) word list. Results The words in the “easy” PBK list (List 2) were of higher frequency than the words in the three “equivalent” lists. Moreover, the lexical neighborhoods of the words on the “easy” list contained fewer phonetically similar words than the neighborhoods of the words on the other three “equivalent” lists. Conclusions It is important for researchers to consider word frequency and lexical neighborhood frequency and density when constructing word lists for testing speech perception. The results of this computational analysis of the PBK Test provide additional support for the proposal that spoken words are recognized “relationally” in the context of other phonetically similar words in the lexicon. Implications of using open-set word recognition tests with children with hearing impairments are discussed with regard to the specific vocabulary and information processing demands of the PBK Test. PMID:10466571
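    The lexical neighborhood measures used in this analysis are straightforward to compute over any machine-readable lexicon. As a minimal sketch, assuming the conventional definition of a lexical neighbor (a word differing by one phoneme substitution, insertion, or deletion) and an invented toy lexicon of phoneme strings with hypothetical frequencies (the study itself used a computerized database of English):

    ```python
    def is_neighbor(a, b):
        """True if b differs from a by one substitution, insertion, or deletion."""
        if a == b:
            return False
        la, lb = len(a), len(b)
        if la == lb:  # one substitution
            return sum(x != y for x, y in zip(a, b)) == 1
        if abs(la - lb) != 1:
            return False
        short, long_ = (a, b) if la < lb else (b, a)
        # one insertion/deletion: removing some character of long_ yields short
        return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

    def neighborhood_stats(word, lexicon):
        """Density = neighbor count; frequency-weighted density = summed
        neighbor frequencies. `lexicon` maps phoneme strings to frequencies."""
        neighbors = [w for w in lexicon if is_neighbor(word, w)]
        return len(neighbors), sum(lexicon[w] for w in neighbors)

    # Toy lexicon keyed by phoneme strings (frequencies are invented)
    lexicon = {"kat": 50, "bat": 30, "kab": 5, "at": 20, "kart": 2, "dog": 40}
    print(neighborhood_stats("kat", lexicon))  # → (4, 57)
    ```

    Here the weighting simply sums raw neighbor frequencies; real analyses typically use log frequencies drawn from a large corpus.
    
    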

  3. Voice tracking and spoken word recognition in the presence of other voices

    NASA Astrophysics Data System (ADS)

    Litong-Palima, Marisciel; Violanda, Renante; Saloma, Caesar

    2004-12-01

    We study the human hearing process by modeling the hair cell as a thresholded Hopf bifurcator and compare our calculations with experimental results involving human subjects in two different multi-source listening tasks: voice tracking and spoken-word recognition. In the model, we observed noise suppression by destructive interference between noise sources, which weakens the effective noise strength acting on the hair cell. Different success rate characteristics were observed for the two tasks. Hair cell performance at low threshold levels agrees well with results from voice-tracking experiments, while word-recognition results are consistent with a linear model of the hearing process. The ability of humans to track a target voice is robust against cross-talk interference, unlike word-recognition performance, which deteriorates quickly with the number of uncorrelated noise sources in the environment; this response behavior is associated with linear systems.
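    The abstract does not give the model's equations; a minimal numerical sketch of the standard forced Hopf normal form, with invented parameter values and a hypothetical detection threshold on the response amplitude, might look like:

    ```python
    from math import pi
    import cmath

    def hopf_response(mu, omega0, F, omega, dt=1e-3, steps=20000):
        """Euler-integrate the forced Hopf normal form
            dz/dt = (mu + i*omega0) * z - |z|^2 * z + F * exp(i*omega*t)
        and return the oscillation amplitude |z| after the transient."""
        z = 0.01 + 0j  # small initial condition
        for n in range(steps):
            t = n * dt
            dz = (mu + 1j * omega0) * z - abs(z) ** 2 * z \
                 + F * cmath.exp(1j * omega * t)
            z += dz * dt
        return abs(z)

    # At the bifurcation point (mu = 0), the resonant response to weak forcing
    # is strongly compressive (|z| ~ F**(1/3)); "thresholding" the hair cell
    # can be modeled as a cutoff on this amplitude (cutoff value is invented).
    amp = hopf_response(mu=0.0, omega0=2 * pi, F=0.05, omega=2 * pi)
    detected = amp > 0.1
    ```

    This only illustrates the class of oscillator the authors invoke, not their actual implementation.
    
    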

  4. The influence of lexical statistics on temporal lobe cortical dynamics during spoken word listening

    PubMed Central

    Cibelli, Emily S.; Leonard, Matthew K.; Johnson, Keith; Chang, Edward F.

    2015-01-01

    Neural representations of words are thought to have a complex spatio-temporal cortical basis. It has been suggested that spoken word recognition is not a process of feed-forward computations from phonetic to lexical forms, but rather involves the online integration of bottom-up input with stored lexical knowledge. Using direct neural recordings from the temporal lobe, we examined cortical responses to words and pseudowords. We found that neural populations were not only sensitive to lexical status (real vs. pseudo), but also to cohort size (number of words matching the phonetic input at each time point) and cohort frequency (lexical frequency of those words). These lexical variables modulated neural activity from the posterior to anterior temporal lobe, and also dynamically as the stimuli unfolded on a millisecond time scale. Our findings indicate that word recognition is not purely modular, but relies on rapid and online integration of multiple sources of lexical knowledge. PMID:26072003

  5. "Poetry Is Not a Special Club": How Has an Introduction to the Secondary Discourse of Spoken Word Made Poetry a Memorable Learning Experience for Young People?

    ERIC Educational Resources Information Center

    Dymoke, Sue

    2017-01-01

    This paper explores the impact of a Spoken Word Education Programme (SWEP hereafter) on young people's engagement with poetry in a group of schools in London, UK. It does so with reference to the secondary Discourses of school-based learning and the Spoken Word community, an artistic "community of practice" into which they were being…

  6. Words Spoken with Insistence: "Wak'as" and the Limits of the Bolivian Multi-Institutional Democracy

    ERIC Educational Resources Information Center

    Cuelenaere, Laurence Janine

    2009-01-01

    Building on 18 months of fieldwork in the Bolivian highlands, this dissertation examines how the traversing of landscapes, through the mediation of spatial practices and spoken words, is embedded in systems of belief. By focusing on "wak'as" (i.e., sacred objects) and on how the inhabitants of the Altiplano relate to the Andean deities known as…

  7. Development of brain networks involved in spoken word processing of Mandarin Chinese.

    PubMed

    Cao, Fan; Khalid, Kainat; Lee, Rebecca; Brennan, Christine; Yang, Yanhui; Li, Kuncheng; Bolger, Donald J; Booth, James R

    2011-08-01

    Developmental differences in phonological and orthographic processing of Chinese spoken words were examined in 9-year-olds, 11-year-olds and adults using functional magnetic resonance imaging (fMRI). Rhyming and spelling judgments were made to two-character words presented sequentially in the auditory modality. Developmental comparisons between adults and both groups of children combined showed that age-related changes in activation in visuo-orthographic regions depended on the task. There were developmental increases in the left inferior temporal gyrus and the right inferior occipital gyrus in the spelling task, suggesting more extensive visuo-orthographic processing in a task that required access to these representations. Conversely, there were developmental decreases in activation in the left fusiform gyrus and left middle occipital gyrus in the rhyming task, suggesting that the development of reading is marked by reduced involvement of orthography in a spoken language task that does not require access to these orthographic representations. Developmental decreases may arise from the existence of extensive homophony (auditory words that have multiple spellings) in Chinese. In addition, we found that 11-year-olds and adults showed similar activation in the left superior temporal gyrus across tasks, with both groups showing greater activation than 9-year-olds. This pattern suggests early development of perceptual representations of phonology. In contrast, 11-year-olds and 9-year-olds showed similar activation in the left inferior frontal gyrus across tasks, with both groups showing weaker activation than adults. This pattern suggests late development of controlled retrieval and selection of lexical representations. Altogether, this study suggests differential effects of character acquisition on the development of components of the language network in Chinese as compared to previous reports on alphabetic languages.

  8. A task-dependent causal role for low-level visual processes in spoken word comprehension.

    PubMed

    Ostarek, Markus; Huettig, Falk

    2017-08-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation.

  9. Polling the effective neighborhoods of spoken words with the verbal transformation effect.

    PubMed

    Bashford, James A; Warren, Richard M; Lenz, Peter W

    2006-04-01

    Studies of the effects of lexical neighbors upon the recognition of spoken words have generally assumed that the most salient competitors differ by a single phoneme. The present study employs a procedure that induces the listeners to perceive and call out the salient competitors. By presenting a recording of a monosyllable repeated over and over, perceptual adaptation is produced, and perception of the stimulus is replaced by perception of a competitor. Reports from groups of subjects were obtained for monosyllables that vary in their frequency-weighted neighborhood density. The findings are compared with predictions based upon the neighborhood activation model.

  10. The role of visual representations during the lexical access of spoken words

    PubMed Central

    Lewis, Gwyneth; Poeppel, David

    2015-01-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability - concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. PMID:24814579

  11. The role of visual representations during the lexical access of spoken words.

    PubMed

    Lewis, Gwyneth; Poeppel, David

    2014-07-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability - concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation.

  12. "Brief report: increase in production of spoken words in some children with autism after PECS teaching to Phase III".

    PubMed

    Carr, Deborah; Felce, Janet

    2007-04-01

    The context for this work was an evaluation study [Carr, D., & Felce, J. A. (in press)] of the early phases of the Picture Exchange Communication System (PECS) [Frost, L. A., & Bondy, A. S. (1994). The picture exchange communication system training manual. Cherry Hill, NJ: Pyramid Educational Consultants, Inc.; Frost, L. A., & Bondy, A. S. (2004). The picture exchange communication system training manual, 2nd edn. Newark, DE: Pyramid Educational Consultants, Inc.]. This paper reports that five of 24 children who received 15 h of PECS teaching towards Phase III over a period of 4-5 weeks showed concomitant increases in speech production, either in initiating communication with staff or in responding, or both. No children in the PECS group demonstrated a decrease in spoken words after receiving PECS teaching. In the control group, only one of 17 children demonstrated a minimal increase, and four of 17 children demonstrated a decrease in use of spoken words after a similar period without PECS teaching.

  13. Disentangling fast and slow attentional influences of negative and taboo spoken words in the emotional Stroop paradigm.

    PubMed

    Bertels, Julie; Kolinsky, Régine

    2016-09-01

    Although the influence of the emotional content of stimuli on attention has been considered to occur within a trial, recent studies have revealed that the presentation of such stimuli also involves a slow component. The aim of the present study was to investigate fast and slow effects of negative (Exp. 1) and taboo (Exp. 2) spoken words. For this purpose, we used an auditory variant of the emotional Stroop paradigm in which each emotional word was followed by a sequence of neutral words. Replicating results from our previous study, we observed slow but no fast effects of negative and taboo words, which we interpreted as reflecting difficulty in disengaging attention from their emotional dimension. Interestingly, while the presentation of a negative word only delayed the processing of the immediately subsequent neutral word, slow effects of taboo words were long-lasting. Nevertheless, such attentional effects were only observed when the emotional words were presented in the first block of trials, suggesting that once participants develop strategies to perform the task, the attention-grabbing effects of emotional words disappear. Hence, far from being automatic, the occurrence of these effects depends on participants' attentional set.

  14. The Language, Tone and Prosody of Emotions: Neural Substrates and Dynamics of Spoken-Word Emotion Perception

    PubMed Central

    Liebenthal, Einat; Silbersweig, David A.; Stern, Emily

    2016-01-01

    Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala, a subcortical center for emotion perception, are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role in prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.

  15. The Language, Tone and Prosody of Emotions: Neural Substrates and Dynamics of Spoken-Word Emotion Perception.

    PubMed

    Liebenthal, Einat; Silbersweig, David A; Stern, Emily

    2016-01-01

    Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala, a subcortical center for emotion perception, are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role in prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.

  16. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    PubMed

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. The research was administered as a cross-sectional study. The sample comprised 60 Persian-speaking 5- to 7-year-old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experiments: auditory-only and audiovisual presentation conditions. The test was a closed set of 30 words that were orally presented by a speech-language pathologist. The scores for audiovisual word perception were significantly higher than in the auditory-only condition in the children with normal hearing (P<0.01) and cochlear implants (P<0.05); however, in the children with hearing aids, there was no significant difference between word perception scores in the auditory-only and audiovisual presentation conditions (P>0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe-to-profound hearing loss in order to determine whether a cochlear implant or hearing aid has been efficient for them; i.e., if a child with hearing impairment who uses a CI or HA can obtain higher scores in audiovisual spoken word recognition than in the auditory-only condition, his or her auditory skills have developed appropriately owing to an effective CI or HA as one of the main factors of auditory habilitation.

  17. Children’s Recall of Words Spoken in Their First and Second Language: Effects of Signal-to-Noise Ratio and Reverberation Time

    PubMed Central

    Hurtig, Anders; Keus van de Poll, Marijke; Pekkola, Elina P.; Hygge, Staffan; Ljung, Robert; Sörqvist, Patrik

    2016-01-01

    Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants’ first- (L1) and second-language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 s) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 in comparison with recall of spoken words presented in L1. Words that were presented with a high SNR (+12 dBA) improved recall compared to a low SNR (+3 dBA). Reverberation time interacted with SNR to the effect that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language. PMID:26834665
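    The SNR levels manipulated above follow the standard decibel definition. As a minimal sketch with invented RMS values (the study used A-weighted levels, which additionally apply a frequency weighting to the signals):

    ```python
    import math

    def snr_db(signal_rms, noise_rms):
        """Signal-to-noise ratio in dB from RMS amplitudes (20*log10 of the ratio)."""
        return 20 * math.log10(signal_rms / noise_rms)

    # +12 dB means the speech RMS is about 4x the noise RMS;
    # +3 dB corresponds to a ratio of about 1.41 (sqrt(2)).
    print(round(snr_db(4.0, 1.0)))   # → 12
    print(round(snr_db(1.41, 1.0)))  # → 3
    ```
    
    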

  18. The Temporal Structure of Spoken Language Understanding.

    ERIC Educational Resources Information Center

    Marslen-Wilson, William; Tyler, Lorraine Komisarjevsky

    1980-01-01

    An investigation of word-by-word time-course of spoken language understanding focused on word recognition and structural and interpretative processes. Results supported an online interactive language processing theory, in which lexical, structural, and interpretative knowledge sources communicate and interact during processing efficiently and…

  19. Lexical Influences on Spoken Spondaic Word Recognition in Hearing-Impaired Patients

    PubMed Central

    Moulin, Annie; Richard, Céline

    2015-01-01

    when the occurrence frequencies were based on the years corresponding to the patients' youth, showing a “historic” word frequency effect. This effect was still observed for patients with few years of formal education, but recent occurrence frequencies based on current word exposure had a stronger influence for those patients, especially for younger ones. PMID:26778945

  20. The Activation of Embedded Words in Spoken Word Recognition.

    PubMed

    Zhang, Xujin; Samuel, Arthur G

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions.

  1. The Activation of Embedded Words in Spoken Word Recognition

    PubMed Central

    Zhang, Xujin; Samuel, Arthur G.

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions. PMID:25593407

  2. Phonological and syntactic competition effects in spoken word recognition: evidence from corpus-based statistics.

    PubMed

    Zhuang, Jie; Devereux, Barry J

    2017-02-07

    As spoken language unfolds over time, the speech input transiently activates multiple candidates at different levels of the system - phonological, lexical, and syntactic - which in turn leads to short-lived between-candidate competition. In an fMRI study, we investigated how different kinds of linguistic competition may be modulated by the presence or absence of a prior context (Tyler 1984; Tyler et al. 2008). We found significant effects of lexico-phonological competition for isolated words, but not for words in short phrases, with high competition yielding greater activation in left inferior frontal gyrus (LIFG) and posterior temporal regions. This suggests that phrasal contexts reduce lexico-phonological competition by eliminating form-class inconsistent cohort candidates. A corpus-derived measure of lexico-syntactic competition was associated with greater activation in LIFG for verbs in phrases, but not for isolated verbs, indicating that lexico-syntactic information is boosted by the phrasal context. Together, these findings indicate that LIFG plays a general role in resolving different kinds of linguistic competition.

  3. The Acoustic Trigger to Conceptualization: An Hypothesis Concerning the Role of the Spoken Word in the Development of Higher Mental Processes.

    ERIC Educational Resources Information Center

    Dance, Frank E. X.

    One of many aspects of the linguistic centrality of the spoken word is the "acoustic trigger" to conceptualization, the most significant primal trigger in human beings, which, when activated, results in contrast and comparison leading to symbolic conceptualization. The oral/aural mode, or vocal production and acoustic perception, is developmentally…

  4. On-Line Orthographic Influences on Spoken Language in a Semantic Task

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Perre, Laetitia; Dufau, Stephane; Ziegler, Johannes C.

    2009-01-01

    Literacy changes the way the brain processes spoken language. Most psycholinguists believe that orthographic effects on spoken language are either strategic or restricted to meta-phonological tasks. We used event-related brain potentials (ERPs) to investigate the locus and the time course of orthographic effects on spoken word recognition in a…

  5. Social interaction facilitates word learning in preverbal infants: Word-object mapping and word segmentation.

    PubMed

    Hakuno, Yoko; Omori, Takahide; Yamamoto, Jun-Ichi; Minagawa, Yasuyo

    2017-08-01

    In natural settings, infants learn spoken language with the aid of a caregiver who explicitly provides social signals. Although previous studies have demonstrated that young infants are sensitive to these signals that facilitate language development, the impact of real-life interactions on early word segmentation and word-object mapping remains elusive. We tested whether infants aged 5-6 months and 9-10 months could segment a word from continuous speech and acquire a word-object relation in an ecologically valid setting. In Experiment 1, infants were exposed to a live tutor, while in Experiment 2, another group of infants were exposed to a televised tutor. Results indicate that both younger and older infants were capable of segmenting a word and learning a word-object association only when the stimuli were derived from a live tutor in a natural manner, suggesting that real-life interaction enhances the learning of spoken words in preverbal infants. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Spoken Word Recognition Errors in Speech Audiometry: A Measure of Hearing Performance?

    PubMed Central

    Coene, Martine; van der Lee, Anneke; Govaerts, Paul J.

    2015-01-01

    This report provides a detailed analysis of incorrect responses from an open-set spoken word-repetition task which is part of a Dutch speech audiometric test battery. Single-consonant confusions were analyzed from 230 normal hearing participants in terms of the probability of choice of a particular response on the basis of acoustic-phonetic, lexical, and frequency variables. The results indicate that consonant confusions are better predicted by lexical knowledge than by acoustic properties of the stimulus word. A detailed analysis of the transmission of phonetic features indicates that “voicing” is best preserved whereas “manner of articulation” yields most perception errors. As consonant confusion matrices are often used to determine the degree and type of a patient's hearing impairment, to predict a patient's gain in hearing performance with hearing devices and to optimize the device settings in view of maximum output, the observed findings are highly relevant for the audiological practice. Based on our findings, speech audiometric outcomes provide a combined auditory-linguistic profile of the patient. The use of confusion matrices might therefore not be the method best suited to measure hearing performance. Ideally, they should be complemented by other listening task types that are known to have less linguistic bias, such as phonemic discrimination. PMID:26557717
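One step of the feature analysis this report describes can be illustrated with a toy consonant confusion matrix: for each phonetic feature, count how often responses preserve the stimulus's feature value. The counts and voicing coding below are invented for illustration; the study's actual analysis uses full Dutch consonant matrices and information-transmission measures.

```python
# Hedged sketch: how well is one phonetic feature (here, voicing)
# preserved in a consonant confusion matrix? Counts are illustrative.

voicing = {"p": 0, "b": 1, "t": 0, "d": 1}  # 0 = voiceless, 1 = voiced

# confusions[stimulus][response] = number of times the response was given
confusions = {
    "p": {"p": 80, "b": 10, "t": 8, "d": 2},
    "b": {"b": 75, "p": 15, "d": 8, "t": 2},
}

def feature_transmission(feature):
    """Proportion of responses that preserve the stimulus's feature value."""
    kept = total = 0
    for stim, responses in confusions.items():
        for resp, count in responses.items():
            total += count
            if feature[stim] == feature[resp]:
                kept += count
    return kept / total

print(feature_transmission(voicing))  # prints 0.855
```

Repeating this for manner and place features yields the kind of ranking reported above ("voicing" best preserved, "manner of articulation" worst).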

  7. Spoken Language and Mathematics.

    ERIC Educational Resources Information Center

    Raiker, Andrea

    2002-01-01

    States that teachers and learners use spoken language in the three-part mathematics lesson advocated by the British National Numeracy Strategy. Recognizes the importance of language by emphasizing correct use of mathematical vocabulary in raising standards. Finds that pupils and teachers appear to ascribe different meanings to scientific words because of their…

  8. Modeling open-set spoken word recognition in postlingually deafened adults after cochlear implantation: some preliminary results with the neighborhood activation model.

    PubMed

    Meyer, Ted A; Frisch, Stefan A; Pisoni, David B; Miyamoto, Richard T; Svirsky, Mario A

    2003-07-01

    Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener's lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener's closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly…
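The frequency-weighted choice rule the abstract describes can be sketched as follows, in the spirit of the Neighborhood Activation Model's formulation: a word's perceptual support (the product of its phoneme identification probabilities, taken from confusion matrices) is weighted by word frequency and compared against the summed frequency-weighted support of its neighbors. The words, phoneme probabilities, and frequency counts below are invented for illustration.

```python
# Sketch of the Neighborhood Activation Model's frequency-weighted choice
# rule. All numbers are illustrative, not data from the study.

def stimulus_word_probability(phoneme_probs):
    """Perceptual support for a word: product of its phoneme
    identification probabilities (from confusion matrices)."""
    p = 1.0
    for prob in phoneme_probs:
        p *= prob
    return p

def nam_identification(target, neighbors):
    """target and each neighbor: (phoneme_probs, frequency) tuples.
    Returns the probability of choosing the target over its neighbors."""
    def weighted(word):
        probs, freq = word
        return stimulus_word_probability(probs) * freq
    support = weighted(target)
    return support / (support + sum(weighted(n) for n in neighbors))

# "cat": well-perceived phonemes, common word; two similar-sounding
# neighbors ("cap", "bat") with their own support and frequencies.
cat = ([0.9, 0.85, 0.8], 60.0)
neighbors = [([0.05, 0.85, 0.8], 25.0),   # "cap"
             ([0.9, 0.85, 0.1], 15.0)]    # "bat"
print(round(nam_identification(cat, neighbors), 3))  # -> 0.948
```

This captures the abstract's key claim: common words with few, weakly supported neighbors win the competition more often than rare words in dense neighborhoods.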

  9. Modeling Open-Set Spoken Word Recognition in Postlingually Deafened Adults after Cochlear Implantation: Some Preliminary Results with the Neighborhood Activation Model

    PubMed Central

    Meyer, Ted A.; Frisch, Stefan A.; Pisoni, David B.; Miyamoto, Richard T.; Svirsky, Mario A.

    2012-01-01

    Hypotheses Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? Background The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener’s lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener’s closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Methods Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. Results The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test…

  10. Memory traces for spoken words in the brain as revealed by the hemodynamic correlate of the mismatch negativity.

    PubMed

    Shtyrov, Yury; Osswald, Katja; Pulvermüller, Friedemann

    2008-01-01

    The mismatch negativity response, considered a brain correlate of automatic preattentive auditory processing, is enhanced for word stimuli as compared with acoustically matched pseudowords. This lexical enhancement, taken as a signature of activation of language-specific long-term memory traces, was investigated here using functional magnetic resonance imaging to complement the previous electrophysiological studies. In a passive oddball paradigm, word stimuli were randomly presented as rare deviants among frequent pseudowords; the reverse conditions employed infrequent pseudowords among word stimuli. Random-effect analysis indicated clearly distinct patterns for the different lexical types. Whereas the hemodynamic mismatch response was significant for the word deviants, it did not reach significance for the pseudoword conditions. This difference, more pronounced in the left than right hemisphere, was also assessed by analyzing average parameter estimates in regions of interest within both temporal lobes. A significant hemisphere-by-lexicality interaction confirmed stronger blood oxygenation level-dependent mismatch responses to words than pseudowords in the left but not in the right superior temporal cortex. The increased left superior temporal activation and the laterality of cortical sources elicited by spoken words compared with pseudowords may indicate the activation of cortical circuits for lexical material even in passive oddball conditions and suggest involvement of the left superior temporal areas in housing such word-processing neuronal circuits.

  11. Feature Statistics Modulate the Activation of Meaning During Spoken Word Processing.

    PubMed

    Devereux, Barry J; Taylor, Kirsten I; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K

    2016-03-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in (distinctiveness/sharedness) and likelihood of co-occurrence (correlational strength)--determine conceptual activation. To test these claims, we investigated the role of distinctiveness/sharedness and correlational strength in speech-to-meaning mapping, using a lexical decision task and computational simulations. Responses were faster for concepts with higher sharedness, suggesting that shared features are facilitatory in tasks like lexical decision that require access to them. Correlational strength facilitated responses for slower participants, suggesting a time-sensitive co-occurrence-driven settling mechanism. The computational simulation showed similar effects, with early effects of shared features and later effects of correlational strength. These results support a general-to-specific account of conceptual processing, whereby early activation of shared features is followed by the gradual emergence of a specific target representation. Copyright © 2015 The Authors. Cognitive Science published by Cognitive Science Society, Inc.
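The two feature statistics this abstract tests can be made concrete with a tiny, hypothetical concept-feature matrix. Here distinctiveness is taken as the reciprocal of the number of concepts a feature occurs in (so shared features score low), and correlational strength is approximated by the proportion of concepts in which two features co-occur; both the coding choices and the toy concepts are assumptions for illustration, not the study's norms.

```python
# Toy sketch of the feature statistics described in the abstract, computed
# over an invented concept-feature matrix.

concepts = {
    "dog":   {"has_legs", "barks", "has_fur"},
    "cat":   {"has_legs", "meows", "has_fur"},
    "snake": {"hisses", "has_scales"},
}

def distinctiveness(feature):
    """1 / (number of concepts containing the feature): shared features
    score low, distinctive features score high."""
    n = sum(feature in feats for feats in concepts.values())
    return 1.0 / n

def cooccurrence(f1, f2):
    """Proportion of concepts containing both features (a crude stand-in
    for correlational strength)."""
    both = sum({f1, f2} <= feats for feats in concepts.values())
    return both / len(concepts)

print(distinctiveness("has_legs"))                  # shared by dog and cat -> 0.5
print(distinctiveness("barks"))                     # unique to dog -> 1.0
print(round(cooccurrence("has_legs", "has_fur"), 2))  # dog and cat -> 0.67
```

In the account tested above, low-distinctiveness (shared) features drive fast, general activation, while strongly correlated features drive the later settling onto a specific concept.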

  12. The word-length effect and disyllabic words.

    PubMed

    Lovatt, P; Avons, S E; Masterson, J

    2000-02-01

    Three experiments compared immediate serial recall of disyllabic words that differed on spoken duration. Two sets of long- and short-duration words were selected, in each case maximizing duration differences but matching for frequency, familiarity, phonological similarity, and number of phonemes, and controlling for semantic associations. Serial recall measures were obtained using auditory and visual presentation and spoken and picture-pointing recall. In Experiments 1a and 1b, using the first set of items, long words were better recalled than short words. In Experiments 2a and 2b, using the second set of items, no difference was found between long and short disyllabic words. Experiment 3 confirmed the large advantage for short-duration words in the word set originally selected by Baddeley, Thomson, and Buchanan (1975). These findings suggest that there is no reliable advantage for short-duration disyllables in span tasks, and that previous accounts of a word-length effect in disyllables are based on accidental differences between list items. The failure to find an effect of word duration casts doubt on theories that propose that the capacity of memory span is determined by the duration of list items or the decay rate of phonological information in short-term memory.

  13. Finding Words in a Language that Allows Words without Vowels

    ERIC Educational Resources Information Center

    El Aissati, Abder; McQueen, James M.; Cutler, Anne

    2012-01-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring "win" in "twin" because "t" cannot be a word). However, the constraint would be counter-productive in…

  14. Spoken Idiom Recognition: Meaning Retrieval and Word Expectancy

    ERIC Educational Resources Information Center

    Tabossi, Patrizia; Fanari, Rachele; Wolf, Kinou

    2005-01-01

    This study investigates recognition of spoken idioms occurring in neutral contexts. Experiment 1 showed that both predictable and non-predictable idiom meanings are available at string offset. Yet, only predictable idiom meanings are active halfway through a string and remain active after the string's literal conclusion. Experiment 2 showed that…

  15. Attentional Capture of Objects Referred to by Spoken Language

    ERIC Educational Resources Information Center

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In Experiments 1 and 2, the cue was an isoluminant color change and participants…

  16. Hearing taboo words can result in early talker effects in word recognition for female listeners.

    PubMed

    Tuft, Samantha E; McLennan, Conor T; Krestar, Maura L

    2018-02-01

    Previous spoken word recognition research using the long-term repetition-priming paradigm found performance costs for stimuli mismatching in talker identity. That is, when words were repeated across the two blocks and the identity of the talker changed, reaction times (RTs) were slower than when the repeated words were spoken by the same talker. Such performance costs, or talker effects, followed a time course, occurring only when processing was relatively slow. More recent research suggests that increased explicit and implicit attention towards the talkers can result in talker effects even during relatively fast processing. The purpose of the current study was to examine whether word meaning would influence the pattern of talker effects in an easy lexical decision task and, if so, whether results would differ depending on whether the presentation of neutral and taboo words was mixed or blocked. Regardless of presentation, participants responded to taboo words faster than neutral words. Furthermore, talker effects for the female talker emerged when participants heard both taboo and neutral words (consistent with an attention-based hypothesis), but not for participants who heard only taboo or only neutral words (consistent with the time-course hypothesis). These findings have important implications for theoretical models of spoken word recognition.

  17. The Developing Role of Prosody in Novel Word Interpretation

    ERIC Educational Resources Information Center

    Herold, Debora S.; Nygaard, Lynne C.; Chicos, Kelly A.; Namy, Laura L.

    2011-01-01

    This study examined whether children use prosodic correlates to word meaning when interpreting novel words. For example, do children infer that a word spoken in a deep, slow, loud voice refers to something larger than a word spoken in a high, fast, quiet voice? Participants were 4- and 5-year-olds who viewed picture pairs that varied along a…

  18. Modulating the focus of attention for spoken words at encoding affects frontoparietal activation for incidental verbal memory.

    PubMed

    Christensen, Thomas A; Almryde, Kyle R; Fidler, Lesley J; Lockwood, Julie L; Antonucci, Sharon M; Plante, Elena

    2012-01-01

    Attention is crucial for encoding information into memory, and current dual-process models seek to explain the roles of attention in both recollection memory and incidental-perceptual memory processes. The present study combined an incidental memory paradigm with event-related functional MRI to examine the effect of attention at encoding on the subsequent neural activation associated with unintended perceptual memory for spoken words. At encoding, we systematically varied attention levels as listeners heard a list of single English nouns. We then presented these words again in the context of a recognition task and assessed the effect of modulating attention at encoding on the BOLD responses to words that were either attended strongly, weakly, or not heard previously. MRI revealed activity in right-lateralized inferior parietal and prefrontal regions, and positive BOLD signals varied with the relative level of attention present at encoding. Temporal analysis of hemodynamic responses further showed that the time course of BOLD activity was modulated differentially by unintentionally encoded words compared to novel items. Our findings largely support current models of memory consolidation and retrieval, but they also provide fresh evidence for hemispheric differences and functional subdivisions in right frontoparietal attention networks that help shape auditory episodic recall.

  19. Modulating the Focus of Attention for Spoken Words at Encoding Affects Frontoparietal Activation for Incidental Verbal Memory

    PubMed Central

    Christensen, Thomas A.; Almryde, Kyle R.; Fidler, Lesley J.; Lockwood, Julie L.; Antonucci, Sharon M.; Plante, Elena

    2012-01-01

    Attention is crucial for encoding information into memory, and current dual-process models seek to explain the roles of attention in both recollection memory and incidental-perceptual memory processes. The present study combined an incidental memory paradigm with event-related functional MRI to examine the effect of attention at encoding on the subsequent neural activation associated with unintended perceptual memory for spoken words. At encoding, we systematically varied attention levels as listeners heard a list of single English nouns. We then presented these words again in the context of a recognition task and assessed the effect of modulating attention at encoding on the BOLD responses to words that were either attended strongly, weakly, or not heard previously. MRI revealed activity in right-lateralized inferior parietal and prefrontal regions, and positive BOLD signals varied with the relative level of attention present at encoding. Temporal analysis of hemodynamic responses further showed that the time course of BOLD activity was modulated differentially by unintentionally encoded words compared to novel items. Our findings largely support current models of memory consolidation and retrieval, but they also provide fresh evidence for hemispheric differences and functional subdivisions in right frontoparietal attention networks that help shape auditory episodic recall. PMID:22144982

  20. Communicating Emotion: Linking Affective Prosody and Word Meaning

    ERIC Educational Resources Information Center

    Nygaard, Lynne C.; Queen, Jennifer S.

    2008-01-01

    The present study investigated the role of emotional tone of voice in the perception of spoken words. Listeners were presented with words that had either a happy, sad, or neutral meaning. Each word was spoken in a tone of voice (happy, sad, or neutral) that was congruent, incongruent, or neutral with respect to affective meaning, and naming…

  1. Do Low Income Youth of Color See "The Bigger Picture" When Discussing Type 2 Diabetes: A Qualitative Evaluation of a Public Health Literacy Campaign.

    PubMed

    Schillinger, Dean; Tran, Jessica; Fine, Sarah

    2018-04-24

    As Type 2 diabetes spikes among minority and low-income youth, there is an urgent need to tackle the drivers of this preventable disease. The Bigger Picture (TBP) is a counter-marketing campaign using youth-created, spoken-word public service announcements (PSAs) to reframe the epidemic as a socio-environmental phenomenon requiring communal action, civic engagement and norm change. We examined whether and how TBP PSAs advance health literacy among low-income, minority youth. We showed nine PSAs, asking individuals open-ended questions via questionnaire, then facilitating a focus group to reflect upon the PSAs. Questionnaire responses revealed a balance between individual vs. public health literacy. Some focused on individual responsibility and behaviors, while others described socio-environmental forces underlying risk. The focus group generated a preponderance of public health literacy responses, emphasizing future action. Striking sociopolitical themes emerged, reflecting tensions that minority and low-income youth experience, such as entrapment vs. liberation. Our findings speak to the structural barriers and complexities underlying diabetes risk, and the ability of the spoken-word medium to make these challenges visible and motivate action. Delivering TBP content to promote interactive reflection has potential to change behavioral norms and build capacity to confront the social, economic and structural factors that influence behaviors.

  2. How long-term memory and accentuation interact during spoken language comprehension.

    PubMed

    Li, Xiaoqing; Yang, Yufang

    2013-04-01

    Spoken language comprehension requires immediate integration of different information types, such as semantics, syntax, and prosody. Meanwhile, both the information derived from speech signals and the information retrieved from long-term memory exert their influence on language comprehension immediately. Using EEG (electroencephalogram), the present study investigated how the information retrieved from long-term memory interacts with accentuation during spoken language comprehension. Mini Chinese discourses were used as stimuli, with an interrogative or assertive context sentence preceding the target sentence. The target sentence included one critical word conveying new information. The critical word was either highly expected or lowly expected given the information retrieved from long-term memory. Moreover, the critical word was either consistently accented or inconsistently de-accented. The results revealed that for lowly expected new information, inconsistently de-accented words elicited a larger N400 and larger theta power increases (4-6 Hz) than consistently accented words. In contrast, for the highly expected new information, consistently accented words elicited a larger N400 and larger alpha power decreases (8-14 Hz) than inconsistently de-accented words. The results suggest that, during spoken language comprehension, the effect of accentuation interacted with the information retrieved from long-term memory immediately. Moreover, our results also have important consequences for our understanding of the processing nature of the N400. The N400 amplitude is not only enhanced for incorrect information (new and de-accented word) but also enhanced for correct information (new and accented words). Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Neural systems underlying the influence of sound shape properties of the lexicon on spoken word production: do fMRI findings predict effects of lesions in aphasia?

    PubMed

    Bullock-Rest, Natasha; Cerny, Alissa; Sweeney, Carol; Palumbo, Carole; Kurowski, Kathleen; Blumstein, Sheila E

    2013-08-01

    Previous behavioral work has shown that the phonetic realization of words in spoken word production is influenced by sound shape properties of the lexicon. A recent fMRI study (Peramunage, Blumstein, Myers, Goldrick, & Baese-Berk, 2011) showed that this influence of lexical structure on phonetic implementation recruited a network of areas that included the supramarginal gyrus (SMG) extending into the posterior superior temporal gyrus (pSTG) and the inferior frontal gyrus (IFG). The current study examined whether lesions in these areas result in a concomitant functional deficit. Ten individuals with aphasia and 8 normal controls read words aloud in which half had a voiced stop consonant minimal pair (e.g. tame; dame), and the other half did not (e.g. tooth; (*)dooth). Voice onset time (VOT) analysis of the initial voiceless stop consonant revealed that aphasic participants with lesions including the IFG and/or the SMG behaved as did normals, showing VOT lengthening effects for minimal pair words compared to non-minimal pair words. The failure to show a functional deficit in the production of VOT as a function of the lexical properties of a word with damage in the IFG or SMG suggests that fMRI findings do not always predict effects of lesions on behavioral deficits in aphasia. Nonetheless, the pattern of production errors made by the aphasic participants did reflect properties of the lexicon, supporting the view that the SMG and IFG are part of a lexical network involved in spoken word production. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Do Low Income Youth of Color See “The Bigger Picture” When Discussing Type 2 Diabetes: A Qualitative Evaluation of a Public Health Literacy Campaign

    PubMed Central

    Schillinger, Dean; Tran, Jessica; Fine, Sarah

    2018-01-01

    As Type 2 diabetes spikes among minority and low-income youth, there is an urgent need to tackle the drivers of this preventable disease. The Bigger Picture (TBP) is a counter-marketing campaign using youth-created, spoken-word public service announcements (PSAs) to reframe the epidemic as a socio-environmental phenomenon requiring communal action, civic engagement and norm change. Methods: We examined whether and how TBP PSAs advance health literacy among low-income, minority youth. We showed nine PSAs, asking individuals open-ended questions via questionnaire, then facilitating a focus group to reflect upon the PSAs. Results: Questionnaire responses revealed a balance between individual vs. public health literacy. Some focused on individual responsibility and behaviors, while others described socio-environmental forces underlying risk. The focus group generated a preponderance of public health literacy responses, emphasizing future action. Striking sociopolitical themes emerged, reflecting tensions that minority and low-income youth experience, such as entrapment vs. liberation. Conclusion: Our findings speak to the structural barriers and complexities underlying diabetes risk, and the ability of the spoken-word medium to make these challenges visible and motivate action. Practice Implications: Delivering TBP content to promote interactive reflection has potential to change behavioral norms and build capacity to confront the social, economic and structural factors that influence behaviors. PMID:29695114

  5. The Beat of Boyle Street: empowering Aboriginal youth through music making.

    PubMed

    Wang, Elaine L

    2010-01-01

    An irrepressibly popular musical phenomenon, hip-hop is close to spoken word and focuses on lyrics with a message, reviving local traditions of song that tell histories, counsel listeners, and challenge participants to outdo one another in clever exchanges. A hip-hop music-making program in Edmonton, Canada, successfully re-engages at-risk Aboriginal youth, a population with high dropout rates, in school and helps them establish a healthy sense of self and of their identity as Aboriginals.

  6. Does an attention bias to appetitive and aversive words modulate interference control in youth with ADHD?

    PubMed

    Ma, Ili; Mies, Gabry W; Lambregts-Rommelse, Nanda N J; Buitelaar, Jan K; Cillessen, Antonius H N; Scheres, Anouk

    2018-05-01

    Interference control refers to the ability to selectively attend to certain information while ignoring distracting information. This ability can vary as a function of distractor relevance. Distractors that are particularly relevant to an individual may attract more attention than less relevant distractors. This is referred to as attention bias. Weak interference control and altered reward sensitivity are both important features of attention deficit hyperactivity disorder (ADHD). However, interference control is typically studied in isolation. This study integrates both. Youths (aged 9 to 17 years) with ADHD (n = 37, 25 boys) and typically developing controls (n = 38, 20 boys) completed a Stroop task using appetitive words and matched neutral words to assess whether appetitive distractors diminished interference control more in youths with ADHD than controls. In order to test for specificity, aversive words were also included. As expected, appetitive words disrupted interference control, but this effect was not stronger for youths with ADHD than the controls. Aversive words, on the other hand, facilitated interference control. Dimensional analyses revealed that this facilitation effect increased substantially as a function of ADHD symptom severity. Possible mechanisms for this effect include up-regulation of interference control as a function of induced negative mood, or as a function of increased effort. In conclusion, appetitive words do not lead to worse interference control in youths with ADHD compared with controls. Interference control was modulated in a valence-specific manner, concurrent with mood-induced effects on cognitive control.

  7. Electrophysiological Responses to Coarticulatory and Word Level Miscues

    ERIC Educational Resources Information Center

    Archibald, Lisa M. D.; Joanisse, Marc F.

    2011-01-01

    The influence of coarticulation cues on spoken word recognition is not yet well understood. This acoustic/phonetic variation may be processed early and recognized as sensory noise to be stripped away, or it may influence processing at a later prelexical stage. The present study used event-related potentials (ERPs) in a picture/spoken word matching…

  8. Effects of Rhyme and Spelling Patterns on Auditory Word ERPs Depend on Selective Attention to Phonology

    ERIC Educational Resources Information Center

    Yoncheva, Yuliya N.; Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.

    2013-01-01

    ERP responses to spoken words are sensitive to both rhyming effects and effects of associated spelling patterns. Are such effects automatically elicited by spoken words or dependent on selectively attending to phonology? To address this question, ERP responses to spoken word pairs were investigated under two equally demanding listening tasks that…

  9. The locus of word frequency effects in skilled spelling-to-dictation.

    PubMed

    Chua, Shi Min; Liow, Susan J Rickard

    2014-01-01

    In spelling-to-dictation tasks, skilled spellers consistently initiate spelling of high-frequency words faster than spelling of low-frequency words. Tainturier and Rapp's model of spelling shows three possible loci for this frequency effect: spoken word recognition, orthographic retrieval, and response execution of the first letter. Thus far, researchers have attributed the effect solely to orthographic retrieval without considering spoken word recognition or response execution. To investigate word frequency effects at each of these three loci, Experiment 1 involved a delayed spelling-to-dictation task and Experiment 2 involved a delayed/uncertain task. In Experiment 1, no frequency effect was found in the 1200-ms delayed condition, suggesting that response execution is not affected by word frequency. In Experiment 2, no frequency effect was found in the delayed/uncertain task, which reflects orthographic retrieval, whereas a frequency effect was found in the comparison immediate/uncertain task, which reflects both spoken word recognition and orthographic retrieval. The results of this two-part study suggest that frequency effects in spoken word recognition play a substantial role in skilled spelling-to-dictation. Discrepancies between these findings and previous research, and the limitations of the present study, are discussed.

  10. Talker familiarity and spoken word recognition in school-age children*

    PubMed Central

    Levi, Susannah V.

    2014-01-01

    Research with adults has shown that spoken language processing is improved when listeners are familiar with talkers’ voices, known as the familiar talker advantage. The current study explored whether this ability extends to school-age children, who are still acquiring language. Children were familiarized with the voices of three German–English bilingual talkers and were tested on the speech of six bilinguals, three of whom were familiar. Results revealed that children do show improved spoken language processing when they are familiar with the talkers, but this improvement was limited to highly familiar lexical items. This restriction of the familiar talker advantage is attributed to differences in the representation of highly familiar and less familiar lexical items. In addition, children did not exhibit accent-general learning; despite having been exposed to German-accented talkers during training, there was no improvement for novel German-accented talkers. PMID:25159173

  11. Word Recognition in Auditory Cortex

    ERIC Educational Resources Information Center

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  12. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    PubMed Central

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2011-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., "Luckily Ben had picked up some salt and pepper/basil", preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent)). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition. PMID:23002319

  13. Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)

    NASA Astrophysics Data System (ADS)

    Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto

    An automatic speech-to-text transformation system, suited to unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a prior stage of isolated-word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for the input sequence. Pronunciation differences among regions of Brazil are considered, but only those that cause differences in phonological transcription, because differences at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all candidate written words are analyzed from an orthographic and grammatical point of view to eliminate the incorrect ones.

  14. The impact of music on learning and consolidation of novel words.

    PubMed

    Tamminen, Jakke; Rastle, Kathleen; Darby, Jess; Lucas, Rebecca; Williamson, Victoria J

    2017-01-01

    Music can be a powerful mnemonic device, as shown by a body of literature demonstrating that listening to text sung to a familiar melody results in better memory for the words compared to conditions where they are spoken. Furthermore, patients with a range of memory impairments appear to be able to form new declarative memories when they are encoded in the form of lyrics in a song, while unable to remember similar materials after hearing them in the spoken modality. Whether music facilitates the acquisition of completely new information, such as new vocabulary, remains unknown. Here we report three experiments in which adult participants learned novel words in the spoken or sung modality. While we found no benefit of musical presentation on free recall or recognition memory of novel words, novel words learned in the sung modality were more strongly integrated in the mental lexicon compared to words learned in the spoken modality. This advantage for the sung words was only present when the training melody was familiar. The impact of musical presentation on learning therefore appears to extend beyond episodic memory and can be reflected in the emergence and properties of new lexical representations.

  15. Attention Demands of Spoken Word Planning: A Review

    PubMed Central

    Roelofs, Ardi; Piai, Vitória

    2011-01-01

    Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot proceed without paying some form of attention. Here, we review evidence that word planning requires some but not full attention. The evidence comes from chronometric studies of word planning in picture naming and word reading under divided attention conditions. It is generally assumed that the central attention demands of a process are indexed by the extent that the process delays the performance of a concurrent unrelated task. The studies measured the speed and accuracy of linguistic and non-linguistic responding as well as eye gaze durations reflecting the allocation of attention. First, empirical evidence indicates that in several task situations, processes up to and including phonological encoding in word planning delay, or are delayed by, the performance of concurrent unrelated non-linguistic tasks. These findings suggest that word planning requires central attention. Second, empirical evidence indicates that conflicts in word planning may be resolved while concurrently performing an unrelated non-linguistic task, making a task decision, or making a go/no-go decision. These findings suggest that word planning does not require full central attention. We outline a computationally implemented theory of attention and word planning, and describe at various points the outcomes of computer simulations that demonstrate the utility of the theory in accounting for the key findings. Finally, we indicate how attention deficits may contribute to impaired language performance, such as in individuals with specific language impairment. PMID:22069393

  16. Phantom Word Activation in L2

    ERIC Educational Resources Information Center

    Broersma, Mirjam; Cutler, Anne

    2008-01-01

    L2 listening can involve the phantom activation of words which are not actually in the input. All spoken-word recognition involves multiple concurrent activation of word candidates, with selection of the correct words achieved by a process of competition between them. L2 listening involves more such activation than L1 listening, and we report two…

  17. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    PubMed

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Executive Functions Contribute Uniquely to Reading Competence in Minority Youth

    PubMed Central

    Jacobson, Lisa A.; Koriakin, Taylor; Lipkin, Paul; Boada, Richard; Frijters, Jan; Lovett, Maureen; Hill, Dina; Willcutt, Erik; Gottwald, Stephanie; Wolf, Maryanne; Bosson-Heenan, Joan; Gruen, Jeffrey R.; Mahone, E. Mark

    2018-01-01

    Competent reading requires various skills beyond those for basic word reading (i.e., core language skills, rapid naming, phonological processing). Contributing “higher-level” or domain-general processes include information processing speed and executive functions (working memory, strategic problem solving, attentional switching). Research in this area has relied on largely Caucasian samples, with limited representation of children from racial or ethnic minority groups. This study examined contributions of executive skills to reading competence in 761 children of minority backgrounds. Hierarchical linear regressions examined unique contributions of executive functions (EF) to word reading, fluency, and comprehension. EF contributed uniquely to reading performance, over and above reading-related language skills; working memory contributed uniquely to all components of reading; while attentional switching, but not problem solving, contributed to isolated and contextual word reading and reading fluency. Problem solving uniquely predicted comprehension, suggesting that this skill may be especially important for reading comprehension in minority youth. Attentional switching may play a unique role in development of reading fluency in minority youth, perhaps as a result of the increased demand for switching between spoken versus written dialects. Findings have implications for educational and clinical practice with regard to reading instruction, remedial reading intervention, and assessment of individuals with reading difficulty. PMID:26755569

  19. Executive Functions Contribute Uniquely to Reading Competence in Minority Youth.

    PubMed

    Jacobson, Lisa A; Koriakin, Taylor; Lipkin, Paul; Boada, Richard; Frijters, Jan C; Lovett, Maureen W; Hill, Dina; Willcutt, Erik; Gottwald, Stephanie; Wolf, Maryanne; Bosson-Heenan, Joan; Gruen, Jeffrey R; Mahone, E Mark

    Competent reading requires various skills beyond those for basic word reading (i.e., core language skills, rapid naming, phonological processing). Contributing "higher-level" or domain-general processes include information processing speed and executive functions (working memory, strategic problem solving, attentional switching). Research in this area has relied on largely Caucasian samples, with limited representation of children from racial or ethnic minority groups. This study examined contributions of executive skills to reading competence in 761 children of minority backgrounds. Hierarchical linear regressions examined unique contributions of executive functions (EF) to word reading, fluency, and comprehension. EF contributed uniquely to reading performance, over and above reading-related language skills; working memory contributed uniquely to all components of reading; while attentional switching, but not problem solving, contributed to isolated and contextual word reading and reading fluency. Problem solving uniquely predicted comprehension, suggesting that this skill may be especially important for reading comprehension in minority youth. Attentional switching may play a unique role in development of reading fluency in minority youth, perhaps as a result of the increased demand for switching between spoken versus written dialects. Findings have implications for educational and clinical practice with regard to reading instruction, remedial reading intervention, and assessment of individuals with reading difficulty.

  20. The interaction of lexical tone, intonation and semantic context in on-line spoken word recognition: an ERP study on Cantonese Chinese.

    PubMed

    Kung, Carmen; Chwilla, Dorothee J; Schriefers, Herbert

    2014-01-01

    In two ERP experiments, we investigate the on-line interplay of lexical tone, intonation and semantic context during spoken word recognition in Cantonese Chinese. Experiment 1 shows that lexical tone and intonation interact immediately. Words with a low lexical tone at the end of questions (with a rising question intonation) lead to a processing conflict. This is reflected in a low accuracy in lexical identification and in a P600 effect compared to the same words at the end of a statement. Experiment 2 shows that a strongly biasing semantic context leads to much better lexical-identification performance for words with a low tone at the end of questions and to a disappearance of the P600 effect. These results support the claim that semantic context plays a major role in disentangling the tonal information from the intonational information, and thus, in resolving the on-line conflict between intonation and tone. However, the ERP data indicate that the introduction of a semantic context does not entirely eliminate on-line processing problems for words at the end of questions. This is revealed by the presence of an N400 effect for words with a low lexical tone and for words with a high-mid lexical tone at the end of questions. The ERP data thus show that, while semantic context helps in the eventual lexical identification, it makes the deviation of the contextually expected lexical tone from the actual acoustic signal more salient. © 2013 Published by Elsevier Ltd.

  1. Production Is Only Half the Story - First Words in Two East African Languages.

    PubMed

    Alcock, Katherine J

    2017-01-01

    Theories of early learning of nouns in children's vocabularies divide into those that emphasize input (language and non-linguistic aspects) and those that emphasize child conceptualisation. Most data though come from production alone, assuming that learning a word equals speaking it. Methodological issues can mean production and comprehension data within or across input languages are not comparable. Early vocabulary production and comprehension were examined in children hearing two Eastern Bantu languages whose grammatical features may encourage early verb knowledge. Parents of 208 infants aged 8-20 months were interviewed using Communicative Development Inventories that assess infants' first spoken and comprehended words. Raw totals, and proportions of chances to know a word, were compared to data from other languages. First spoken words were mainly nouns (75-95% were nouns versus less than 10% verbs) but first comprehended words included more verbs (15% were verbs) than spoken words did. The proportion of children's spoken words that were verbs increased with vocabulary size, but not the proportion of comprehended words. Significant differences were found between children's comprehension and production but not between languages. This may be for pragmatic reasons, rather than due to concepts with which children approach language learning, or directly due to the input language.

  2. Inferring Speaker Affect in Spoken Natural Language Communication

    ERIC Educational Resources Information Center

    Pon-Barry, Heather Roberta

    2013-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…

  3. Segmentation of Written Words in French

    ERIC Educational Resources Information Center

    Chetail, Fabienne; Content, Alain

    2013-01-01

    Syllabification of spoken words has been largely used to define syllabic properties of written words, such as the number of syllables or syllabic boundaries. By contrast, some authors proposed that the functional structure of written words stems from visuo-orthographic features rather than from the transposition of phonological structure into the…

  4. The effects of sad prosody on hemispheric specialization for words processing.

    PubMed

    Leshem, Rotem; Arzouan, Yossi; Armony-Sivan, Rinat

    2015-06-01

    This study examined the effect of sad prosody on hemispheric specialization for word processing using behavioral and electrophysiological measures. A dichotic listening task combining focused attention and signal-detection methods was conducted to evaluate the detection of a word spoken in neutral or sad prosody. An overall right ear advantage together with leftward lateralization in early (150-170 ms) and late (240-260 ms) processing stages was found for word detection, regardless of prosody. Furthermore, the early stage was most pronounced for words spoken in neutral prosody, showing greater negative activation over the left than the right hemisphere. In contrast, the later stage was most pronounced for words spoken with sad prosody, showing greater positive activation over the left than the right hemisphere. The findings suggest that sad prosody alone was not sufficient to modulate hemispheric asymmetry in word-level processing. We posit that lateralized effects of sad prosody on word processing are largely dependent on the psychoacoustic features of the stimuli as well as on task demands. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Repeated imitation makes human vocalizations more word-like.

    PubMed

    Edmiston, Pierce; Perlman, Marcus; Lupyan, Gary

    2018-03-14

    People have long pondered the evolution of language and the origin of words. Here, we investigate how conventional spoken words might emerge from imitations of environmental sounds. Does the repeated imitation of an environmental sound gradually give rise to more word-like forms? In what ways do these forms resemble the original sounds that motivated them (i.e. exhibit iconicity)? Participants played a version of the children's game 'Telephone'. The first generation of participants imitated recognizable environmental sounds (e.g. glass breaking, water splashing). Subsequent generations imitated the previous generation of imitations for a maximum of eight generations. The results showed that the imitations became more stable and word-like, and later imitations were easier to learn as category labels. At the same time, even after eight generations, both spoken imitations and their written transcriptions could be matched above chance to the category of environmental sound that motivated them. These results show how repeated imitation can create progressively more word-like forms while continuing to retain a resemblance to the original sound that motivated them, and speak to the possible role of human vocal imitation in explaining the origins of at least some spoken words. © 2018 The Author(s).

  6. The Effect of Talker Variability on Word Recognition in Preschool Children

    PubMed Central

    Ryalls, Brigette Oliver; Pisoni, David B.

    2012-01-01

    In a series of experiments, the authors investigated the effects of talker variability on children’s word recognition. In Experiment 1, when stimuli were presented in the clear, 3- and 5-year-olds were less accurate at identifying words spoken by multiple talkers than those spoken by a single talker when the multiple-talker list was presented first. In Experiment 2, when words were presented in noise, 3-, 4-, and 5-year-olds again performed worse in the multiple-talker condition than in the single-talker condition, this time regardless of order; processing multiple talkers became easier with age. Experiment 3 showed that both children and adults were slower to repeat words from multiple-talker than those from single-talker lists. More important, children (but not adults) matched acoustic properties of the stimuli (specifically, duration). These results provide important new information about the development of talker normalization in speech perception and spoken word recognition. PMID:9149923

  7. Tracking the Time Course of Word-Frequency Effects in Auditory Word Recognition with Event-Related Potentials

    ERIC Educational Resources Information Center

    Dufour, Sophie; Brunelliere, Angele; Frauenfelder, Ulrich H.

    2013-01-01

    Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to…

  8. Production Is Only Half the Story — First Words in Two East African Languages

    PubMed Central

    Alcock, Katherine J.

    2017-01-01

    Theories of early learning of nouns in children’s vocabularies divide into those that emphasize input (language and non-linguistic aspects) and those that emphasize child conceptualisation. Most data though come from production alone, assuming that learning a word equals speaking it. Methodological issues can mean production and comprehension data within or across input languages are not comparable. Early vocabulary production and comprehension were examined in children hearing two Eastern Bantu languages whose grammatical features may encourage early verb knowledge. Parents of 208 infants aged 8–20 months were interviewed using Communicative Development Inventories that assess infants’ first spoken and comprehended words. Raw totals, and proportions of chances to know a word, were compared to data from other languages. First spoken words were mainly nouns (75–95% were nouns versus less than 10% verbs) but first comprehended words included more verbs (15% were verbs) than spoken words did. The proportion of children’s spoken words that were verbs increased with vocabulary size, but not the proportion of comprehended words. Significant differences were found between children’s comprehension and production but not between languages. This may be for pragmatic reasons, rather than due to concepts with which children approach language learning, or directly due to the input language. PMID:29163280

  9. "Daddy, Where Did the Words Go?" How Teachers Can Help Emergent Readers Develop a Concept of Word in Text

    ERIC Educational Resources Information Center

    Flanigan, Kevin

    2006-01-01

    This article focuses on a concept that has rarely been studied in beginning reading research--a child's concept of word in text. Recent examinations of this phenomenon suggest that a child's ability to match spoken words to written words while reading--a concept of word in text--plays a pivotal role in early reading development. In this article,…

  10. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    PubMed

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.

  11. Action and object word writing in a case of bilingual aphasia.

    PubMed

    Kambanaros, Maria; Messinis, Lambros; Anyfantis, Emmanouil

    2012-01-01

    We report the spoken and written naming of a bilingual speaker with aphasia in two languages that differ in morphological complexity, orthographic transparency, and script: Greek and English. AA presented with difficulties in spoken picture naming together with preserved written picture naming for action words in Greek. In English, AA showed similar performance across both tasks for action and object words, i.e., difficulties retrieving action and object names in both spoken and written naming. Our findings support the hypothesis that the cognitive processes used for spoken and written naming are independent components of the language system and can be selectively impaired after brain injury. In the case of bilingual speakers, such impairments affect both languages. We conclude that grammatical category is an organizing principle in bilingual dysgraphia.

  12. Rank-frequency distributions of Romanian words

    NASA Astrophysics Data System (ADS)

    Cocioceanu, Adrian; Raportaru, Carina Mihaela; Nicolin, Alexandru I.; Jakimovski, Dragan

    2017-12-01

    The calibration of voice biometrics solutions requires detailed analyses of spoken texts, and in this context we investigate by computational means the rank-frequency distributions of Romanian words and word series to determine the most common words and word series of the language. To this end, we constructed a corpus of approximately 2.5 million words and then determined that the rank-frequency distributions of Romanian words, as well as of series of two and three consecutive words, obey the celebrated Zipf law.
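
    The rank-frequency analysis described in this record can be sketched in a few lines of Python. This is a minimal illustration on a toy word list, not the authors' corpus or code: it counts word frequencies, assigns ranks by descending frequency, and estimates the Zipf exponent as the least-squares slope of log-frequency against log-rank (Zipf's law predicts a slope near -1 for large natural-language corpora).

    ```python
    from collections import Counter
    import math

    def rank_frequency(words):
        """Return (rank, frequency) pairs sorted by descending frequency."""
        counts = Counter(words)
        freqs = sorted(counts.values(), reverse=True)
        return list(enumerate(freqs, start=1))

    def zipf_exponent(rank_freq):
        """Least-squares slope of log(freq) vs. log(rank)."""
        xs = [math.log(r) for r, _ in rank_freq]
        ys = [math.log(f) for _, f in rank_freq]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    # Toy input; a real analysis would use a large corpus.
    text = ("the quick fox and the lazy dog and the fox "
            "ran over the hill and the dog slept").split()
    rf = rank_frequency(text)
    slope = zipf_exponent(rf)
    # slope is negative; on a large corpus Zipf predicts roughly -1
    print(rf[:3], round(slope, 2))
    ```

    The same approach extends to the two- and three-word series mentioned in the abstract by counting n-grams instead of single words.
    
    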

  13. Interference Effects on the Recall of Pictures, Printed Words and Spoken Words.

    ERIC Educational Resources Information Center

    Burton, John K.; Bruning, Roger H.

    Thirty college undergraduates participated in a study of the effects of acoustic and visual interference on the recall of word and picture triads in both short-term and long-term memory. The subjects were presented 24 triads of monosyllabic nouns representing all of the possible combinations of presentation types: pictures, printed words, and…

  14. Words Get in the Way: Linguistic Effects on Talker Discrimination.

    PubMed

    Narayan, Chandan R; Mak, Lorinda; Bialystok, Ellen

    2017-07-01

    A speech perception experiment provides evidence that the linguistic relationship between words affects the discrimination of their talkers. Listeners discriminated two talkers' voices with various linguistic relationships between their spoken words. Listeners were asked whether two words were spoken by the same person or not. Word pairs varied with respect to the linguistic relationship between the component words, forming phonological rhymes, lexical compounds, reversed compounds, or unrelated pairs. The degree of linguistic relationship between the words affected talker discrimination in a graded fashion, revealing biases listeners have regarding the nature of words and the talkers that speak them. These results indicate that listeners expect a talker's words to be linguistically related, and more generally, indexical processing is affected by linguistic information in a top-down fashion even when listeners are not told to attend to it. Copyright © 2016 Cognitive Science Society, Inc.

  15. Spoken Words. Technical Report No. 177.

    ERIC Educational Resources Information Center

    Hall, William S.; And Others

    The word frequency lists presented in this publication were compiled to create a database for further research into vocabulary use, especially the variation in vocabulary due to differences in situation and social group membership. Taken from the natural conversations of 40 target children (four and a half to five years old) with their families,…

  16. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    ERIC Educational Resources Information Center

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2010-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…

  17. A dual contribution to the involuntary semantic processing of unexpected spoken words.

    PubMed

    Parmentier, Fabrice B R; Turner, Jacqueline; Perez, Laura

    2014-02-01

    Sounds are a major cause of distraction. Unexpected to-be-ignored auditory stimuli presented in the context of an otherwise repetitive acoustic background ineluctably break through selective attention and distract people from an unrelated visual task (deviance distraction). This involuntary capture of attention by deviant sounds has been hypothesized to trigger their semantic appraisal and, in some circumstances, interfere with ongoing performance, but it remains unclear how such processing compares with the automatic processing of distractors in classic interference tasks (e.g., Stroop, flanker, Simon tasks). Using a cross-modal oddball task, we assessed the involuntary semantic processing of deviant sounds in the presence and absence of deviance distraction. The results revealed that some involuntary semantic analysis of spoken distractors occurs in the absence of deviance distraction but that this processing is significantly greater in its presence. We conclude that the automatic processing of spoken distractors reflects 2 contributions, one that is contingent upon deviance distraction and one that is independent from it.

  18. The employment of a spoken language computer applied to an air traffic control task.

    NASA Technical Reports Server (NTRS)

    Laveson, J. I.; Silver, C. A.

    1972-01-01

    The merits of a limited spoken-language (56-word) computer were assessed in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter, with a traffic flow simulation ranging from single-engine to commercial jet aircraft, provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve controller performance.

  19. Time course of syllabic and sub-syllabic processing in Mandarin word production: Evidence from the picture-word interference paradigm.

    PubMed

    Wang, Jie; Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2017-06-05

    The time course of phonological encoding in Mandarin monosyllabic word production was investigated by using the picture-word interference paradigm. Participants were asked to name pictures in Mandarin while visual distractor words were presented before, at, or after picture onset (i.e., stimulus-onset asynchrony/SOA = -100, 0, or +100 ms, respectively). Compared with the unrelated control, distractors sharing atonal syllables with the picture names significantly facilitated the naming responses at -100- and 0-ms SOAs. In addition, the facilitation effect of sharing word-initial segments appeared only at 0-ms SOA, and null effects were found for sharing word-final segments. These results indicate that both syllables and subsyllabic units play important roles in Mandarin spoken word production and, more critically, that syllabic processing precedes subsyllabic processing. The current results lend strong support to the proximate units principle (O'Seaghdha, Chen, & Chen, 2010), which holds that the phonological structure of spoken word production is language-specific and that atonal syllables are the proximate phonological units in Mandarin Chinese. On the other hand, the significance of word-initial segments over word-final segments suggests that serial processing of segmental information may be universal across Germanic languages and Chinese, a possibility that remains to be verified in future studies.

  20. The Perception of Assimilation in Newly Learned Novel Words

    ERIC Educational Resources Information Center

    Snoeren, Natalie D.; Gaskell, M. Gareth; Di Betta, Anna Maria

    2009-01-01

    The present study investigated the mechanisms underlying perceptual compensation for assimilation in novel words. During training, participants learned canonical versions of novel spoken words (e.g., "decibot") presented in isolation. Following exposure to a second set of novel words the next day, participants carried out a phoneme…

  1. Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.

    PubMed

    Douglas, Michael

    2016-02-01

    To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of pre-device-fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL in each year of the study. The DDI group also achieved statistically significantly higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range, compared with 59% in the control group. The preliminary results of this study support further investigation regarding DDI, to determine whether this method can consistently…

  2. Locus of word frequency effects in spelling to dictation: Still at the orthographic level!

    PubMed

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-11-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied in objective word frequency and in phonological neighborhood density were orally presented to adults, who had to write them down. Following the additive factors logic (Sternberg, 1969, 2001), if word frequency in spelling to dictation influences a processing level (the orthographic output level) different from the one influenced by phonological neighborhood density (spoken word recognition), the impact of the two factors should be additive. In contrast, their influence should be overadditive if they act at the same processing level in spelling to dictation, namely spoken word recognition. We found that both factors had a reliable influence on spelling latencies but did not interact. This finding is in line with an orthographic output locus hypothesis of word frequency effects in spelling to dictation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
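    The additive factors logic invoked in this record can be made concrete with a small numeric sketch (the latencies and effect sizes below are hypothetical, not the study's data): if frequency and neighborhood density act on separate processing stages, each contributes an independent amount to total latency, so the 2x2 interaction contrast is zero.

```python
# Hypothetical additive-factors illustration: two factors acting on
# separate stages add independent amounts to the total latency.
import itertools

base = 600  # hypothetical baseline latency in ms
freq_effect = {"high": 0, "low": 40}      # low-frequency words are slower
dens_effect = {"sparse": 0, "dense": 25}  # dense neighborhoods are slower

def predicted_rt(freq, dens):
    """Latency under strict additivity of the two stage effects."""
    return base + freq_effect[freq] + dens_effect[dens]

rts = {(f, d): predicted_rt(f, d)
       for f, d in itertools.product(freq_effect, dens_effect)}

# Additivity check: the frequency effect is the same size at both
# density levels, so the 2x2 interaction contrast is exactly zero.
interaction = ((rts[("low", "dense")] - rts[("high", "dense")])
               - (rts[("low", "sparse")] - rts[("high", "sparse")]))
print(interaction)  # → 0
```

An overadditive pattern (a nonzero interaction) would instead indicate that both factors load on the same stage, which is the contrast the study's design exploits.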

  3. When Half a Word Is Enough: Infants Can Recognize Spoken Words Using Partial Phonetic Information.

    ERIC Educational Resources Information Center

    Fernald, Anne; Swingley, Daniel; Pinto, John P.

    2001-01-01

    Two experiments tracked infants' eye movements to examine use of word-initial information to understand fluent speech. Results indicated that 21- and 18-month-olds recognized partial words as quickly and reliably as whole words. Infants' productive vocabulary and reaction time were related to word recognition accuracy. Results show that…

  4. Lexical frequency and acoustic reduction in spoken Dutch

    NASA Astrophysics Data System (ADS)

    Pluymaekers, Mark; Ernestus, Mirjam; Baayen, R. Harald

    2005-10-01

    This study investigates the effects of lexical frequency on the durational reduction of morphologically complex words in spoken Dutch. The hypothesis that high-frequency words are more reduced than low-frequency words was tested by comparing the durations of affixes occurring in different carrier words. Four Dutch affixes were investigated, each occurring in a large number of words with different frequencies. The materials came from a large database of face-to-face conversations. For each word containing a target affix, one token was randomly selected for acoustic analysis. Measurements were made of the duration of the affix as a whole and the durations of the individual segments in the affix. For three of the four affixes, a higher frequency of the carrier word led to shorter realizations of the affix as a whole, individual segments in the affix, or both. Other relevant factors were the sex and age of the speaker, segmental context, and speech rate. To accommodate these findings, models of speech production should allow word frequency to affect the acoustic realizations of lower-level units, such as individual speech sounds occurring in affixes.

  5. Fast mapping semantic features: performance of adults with normal language, history of disorders of spoken and written language, and attention deficit hyperactivity disorder on a word-learning task.

    PubMed

    Alt, Mary; Gutmann, Michelle L

    2009-01-01

    This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD). Sixty-eight adults were required to associate a novel object with a novel label, and then recognize semantic features of the object and phonological features of the label. Participants were tested for overt ability (accuracy) and covert processing (reaction time). The +ADHD group was less accurate at mapping semantic features and slower to respond to lexical labels than both other groups. Different factors correlated with word learning performance for each group. Adults with language and attention deficits are more impaired at word learning than adults with language deficits only. Despite behavioral profiles like typical peers, adults with hDSWL may use different processing strategies than their peers. Readers will be able to: (1) recognize the influence of a dual disability (hDSWL and ADHD) on word learning outcomes; (2) identify factors that may contribute to word learning in adults in terms of (a) the nature of the words to be learned and (b) the language processing of the learner.

  6. Visual words for lip-reading

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad B. A.; Jassim, Sabah

    2010-04-01

    In this paper, the automatic lip-reading problem is investigated and an innovative approach to solving it is proposed. This new VSR (visual speech recognition) approach depends on the signature of the word itself, which is obtained from a hybrid feature extraction method based on geometric, appearance, and image-transform features. The proposed approach is termed "visual words". It consists of two main parts: 1) feature extraction/selection, and 2) visual speech feature recognition. After localizing the face and lips, several visual features of the lips were extracted, such as the height and width of the mouth; the mutual information and a quality measurement between the DWT of the current ROI and the DWT of the previous ROI; the ratio of vertical to horizontal features taken from the DWT of the ROI; the ratio of vertical edges to horizontal edges of the ROI; the appearance of the tongue; and the appearance of teeth. Each spoken word is represented by eight signals, one for each feature. These signals preserve the dynamics of the spoken word, which carry a good portion of its information. The system is then trained on these features using KNN and DTW. The approach has been evaluated on a large database of different speakers with large experiment sets. The evaluation has demonstrated the efficiency of the visual words approach and shown that VSR is a speaker-dependent problem.
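    The recognition stage described in this record, matching a test word's feature signals against training signals with dynamic time warping (DTW) and a nearest-neighbour (KNN) rule, can be sketched as follows. The feature values, word labels, and reduction to a single feature signal are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of DTW-based word matching: DTW aligns two
# variable-length feature signals, and a 1-nearest-neighbour rule
# picks the training word whose signal is closest.

def dtw_distance(a, b):
    """DTW distance between two 1-D sequences of feature values."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

def nearest_word(query, templates):
    """1-NN: return the label of the closest training signal."""
    return min(templates, key=lambda w: dtw_distance(query, templates[w]))

# Invented single-feature training signals for two words.
templates = {"hello": [1, 3, 4, 3, 1], "yes": [2, 2, 5, 2]}
print(nearest_word([1, 3, 3, 1], templates))  # → hello
```

In the full approach each word would carry eight such signals (one per feature), with the per-feature DTW distances combined before the KNN decision.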

  7. Does Hearing Several Speakers Reduce Foreign Word Learning?

    ERIC Educational Resources Information Center

    Ludington, Jason Darryl

    2016-01-01

    Learning spoken word forms is a vital part of second language learning, and CALL lends itself well to this training. Not enough is known, however, about how auditory variation across speech tokens may affect receptive word learning. To find out, 144 Thai university students with no knowledge of the Patani Malay language learned 24 foreign words in…

  8. Orthographic Consistency and Word-Frequency Effects in Auditory Word Recognition: New Evidence from Lexical Decision and Rime Detection

    PubMed Central

    Petrova, Ana; Gaskell, M. Gareth; Ferrand, Ludovic

    2011-01-01

    Many studies have repeatedly shown an orthographic consistency effect in the auditory lexical decision task. Words with phonological rimes that could be spelled in multiple ways (i.e., inconsistent words) typically produce longer auditory lexical decision latencies and more errors than do words with rimes that could be spelled in only one way (i.e., consistent words). These results have been extended to different languages and tasks, suggesting that the effect is quite general and robust. Despite this growing body of evidence, some psycholinguists believe that orthographic effects on spoken language are exclusively strategic, post-lexical, or restricted to peculiar (low-frequency) words. In the present study, we manipulated consistency and word-frequency orthogonally in order to explore whether the orthographic consistency effect extends to high-frequency words. Two different tasks were used: lexical decision and rime detection. Both tasks produced reliable consistency effects for both low- and high-frequency words. Furthermore, in Experiment 1 (lexical decision), an interaction revealed a stronger consistency effect for low-frequency words than for high-frequency words, as initially predicted by Ziegler and Ferrand (1998), whereas no interaction was found in Experiment 2 (rime detection). Our results extend previous findings by showing that the orthographic consistency effect is obtained not only for low-frequency words but also for high-frequency words. Furthermore, these effects were also obtained in a rime detection task, which does not require the explicit processing of orthographic structure. Globally, our results suggest that literacy changes the way people process spoken words, even for frequent words. PMID:22025916

  9. How vocabulary size in two languages relates to efficiency in spoken word recognition by young Spanish-English bilinguals

    PubMed Central

    Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda

    2010-01-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n=26; 2;6 yrs). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children’s facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children’s ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language. PMID:19726000

  10. The Neural Basis of Competition in Auditory Word Recognition and Spoken Word Production

    ERIC Educational Resources Information Center

    Righi, Giulia

    2010-01-01

    The goal of this dissertation is to examine how brain regions respond to different types of competition during word comprehension and word production. I will present three studies that attempt to enhance the current understanding of which brain regions are sensitive to different aspects of competition and how the nature of the stimuli and the…

  11. Temporal lobe networks supporting the comprehension of spoken words.

    PubMed

    Bonilha, Leonardo; Hillis, Argye E; Hickok, Gregory; den Ouden, Dirk B; Rorden, Chris; Fridriksson, Julius

    2017-09-01

    Auditory word comprehension is a cognitive process that involves the transformation of auditory signals into abstract concepts. Traditional lesion-based studies of stroke survivors with aphasia have suggested that neocortical regions adjacent to auditory cortex are primarily responsible for word comprehension. However, recent primary progressive aphasia and normal neurophysiological studies have challenged this concept, suggesting that the left temporal pole is crucial for word comprehension. Due to its vasculature, the temporal pole is not commonly completely lesioned in stroke survivors, and this heterogeneity may have prevented its identification in lesion-based studies of auditory comprehension. We aimed to resolve this controversy using a combined voxel-based- and structural-connectome-lesion symptom mapping approach, since cortical dysfunction after stroke can arise from cortical damage or from white matter disconnection. Magnetic resonance imaging (T1-weighted and diffusion tensor imaging-based structural connectome), auditory word comprehension and object recognition tests were obtained from 67 chronic left hemisphere stroke survivors. We observed that damage to the inferior temporal gyrus, to the fusiform gyrus, and to a white matter network including the left posterior temporal region and its connections to the middle temporal gyrus, inferior temporal gyrus, and cingulate cortex was associated with word comprehension difficulties after factoring out object recognition. These results suggest that the posterior lateral and inferior temporal regions are crucial for word comprehension, serving as a hub to integrate auditory and conceptual processing. Early processing linking auditory words to concepts is situated in posterior lateral temporal regions, whereas additional and deeper levels of semantic processing likely require more anterior temporal regions. © The Author (2017). Published by Oxford University Press.

  12. Spoken Grammar Practice and Feedback in an ASR-Based CALL System

    ERIC Educational Resources Information Center

    de Vries, Bart Penning; Cucchiarini, Catia; Bodnar, Stephen; Strik, Helmer; van Hout, Roeland

    2015-01-01

    Speaking practice is important for learners of a second language. Computer assisted language learning (CALL) systems can provide attractive opportunities for speaking practice when combined with automatic speech recognition (ASR) technology. In this paper, we present a CALL system that offers spoken practice of word order, an important aspect of…

  13. Influences of High and Low Variability on Infant Word Recognition

    ERIC Educational Resources Information Center

    Singh, Leher

    2008-01-01

    Although infants begin to encode and track novel words in fluent speech by 7.5 months, their ability to recognize words is somewhat limited at this stage. In particular, when the surface form of a word is altered, by changing the gender or affective prosody of the speaker, infants begin to falter at spoken word recognition. Given that natural…

  14. Theories of Spoken Word Recognition Deficits in Aphasia: Evidence from Eye-Tracking and Computational Modeling

    PubMed Central

    Mirman, Daniel; Yee, Eiling; Blumstein, Sheila E.; Magnuson, James S.

    2011-01-01

    We used eye tracking to investigate lexical processing in aphasic participants by examining the fixation time course for rhyme (e.g., carrot – parrot) and cohort (e.g., beaker – beetle) competitors. Broca’s aphasic participants exhibited larger rhyme competition effects than age-matched controls. A reanalysis of previously reported data (Yee, Blumstein, & Sedivy, 2008) confirmed that Wernicke’s aphasic participants exhibited larger cohort competition effects. Individual-level analyses revealed a negative correlation between rhyme and cohort competition effect size across both groups of aphasic participants. Computational model simulations were performed to examine which of several accounts of lexical processing deficits in aphasia might account for the observed effects. Simulation results revealed that slower deactivation of lexical competitors could account for increased cohort competition in Wernicke’s aphasic participants; auditory perceptual impairment could account for increased rhyme competition in Broca's aphasic participants; and a perturbation of a parameter controlling selection among competing alternatives could account for both patterns, as well as the correlation between the effects. In light of these simulation results, we discuss theoretical accounts that have the potential to explain the dynamics of spoken word recognition in aphasia and the possible roles of anterior and posterior brain regions in lexical processing and cognitive control. PMID:21371743

  15. Damage to temporo-parietal cortex decreases incidental activation of thematic relations during spoken word comprehension

    PubMed Central

    Mirman, Daniel; Graziano, Kristen M.

    2012-01-01

    Both taxonomic and thematic semantic relations have been studied extensively in behavioral studies and there is an emerging consensus that the anterior temporal lobe plays a particularly important role in the representation and processing of taxonomic relations, but the neural basis of thematic semantics is less clear. We used eye tracking to examine incidental activation of taxonomic and thematic relations during spoken word comprehension in participants with aphasia. Three groups of participants were tested: neurologically intact control participants (N=14), individuals with aphasia resulting from lesions in left hemisphere BA 39 and surrounding temporo-parietal cortex regions (N=7), and individuals with the same degree of aphasia severity and semantic impairment and anterior left hemisphere lesions (primarily inferior frontal gyrus and anterior temporal lobe) that spared BA 39 (N=6). The posterior lesion group showed reduced and delayed activation of thematic relations, but not taxonomic relations. In contrast, the anterior lesion group exhibited longer-lasting activation of taxonomic relations and did not differ from control participants in terms of activation of thematic relations. These results suggest that taxonomic and thematic semantic knowledge are functionally and neuroanatomically distinct, with the temporo-parietal cortex playing a particularly important role in thematic semantics. PMID:22571932

  16. Comparing Spoken Language Treatments for Minimally Verbal Preschoolers with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna

    2013-01-01

    Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in…

  17. Semantic Access to Embedded Words? Electrophysiological and Behavioral Evidence from Spanish and English

    ERIC Educational Resources Information Center

    Macizo, Pedro; Van Petten, Cyma; O'Rourke, Polly L.

    2012-01-01

    Many multisyllabic words contain shorter words that are not semantic units, like the CAP in HANDICAP and the DURA ("hard") in VERDURA ("vegetable"). The spaces between printed words identify word boundaries, but spurious identification of these embedded words is a potentially greater challenge for spoken language comprehension, a challenge that is…

  18. Incremental comprehension of spoken quantifier sentences: Evidence from brain potentials.

    PubMed

    Freunberger, Dominik; Nieuwland, Mante S

    2016-09-01

    Do people incrementally incorporate the meaning of quantifier expressions to understand an unfolding sentence? Most previous studies concluded that quantifiers do not immediately influence how a sentence is understood, based on the observation that online N400-effects differed from offline plausibility judgments. Those studies, however, used serial visual presentation (SVP), which involves unnatural reading. In the current ERP-experiment, we presented spoken positive and negative quantifier sentences ("Practically all/practically no postmen prefer delivering mail, when the weather is good/bad during the day"). Different from results obtained in a previously reported SVP-study (Nieuwland, 2016), sentence truth-value N400 effects occurred in positive and negative quantifier sentences alike, reflecting fully incremental quantifier comprehension. This suggests that the prosodic information available during spoken language comprehension supports the generation of online predictions for upcoming words and that, at least for quantifier sentences, comprehension of spoken language may proceed more incrementally than comprehension during SVP reading. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  19. Cognitive aging and hearing acuity: modeling spoken language comprehension.

    PubMed

    Wingfield, Arthur; Amichetti, Nicole M; Lash, Amanda

    2015-01-01

    The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model; where it is strong and where there are gaps to be filled.

  20. TISK 1.0: An easy-to-use Python implementation of the time-invariant string kernel model of spoken word recognition.

    PubMed

    You, Heejo; Magnuson, James S

    2018-06-01

    This article describes a new Python distribution of TISK, the time-invariant string kernel model of spoken word recognition (Hannagan et al. in Frontiers in Psychology, 4, 563, 2013). TISK is an interactive-activation model similar to the TRACE model (McClelland & Elman in Cognitive Psychology, 18, 1-86, 1986), but TISK replaces most of TRACE's reduplicated, time-specific nodes with theoretically motivated time-invariant, open-diphone nodes. We discuss the utility of computational models as theory development tools, the relative merits of TISK as compared to other models, and the ways in which researchers might use this implementation to guide their own research and theory development. We describe a TISK model that includes features that facilitate in-line graphing of simulation results, integration with standard Python data formats, and graph and data export. The distribution can be downloaded from https://github.com/maglab-uconn/TISK1.0 .

  1. Is the time course of lexical activation and competition in spoken word recognition affected by adult aging? An event-related potential (ERP) study.

    PubMed

    Hunter, Cynthia R

    2016-10-01

    Adult aging is associated with decreased accuracy for recognizing speech, particularly in noisy backgrounds and for high neighborhood density words, which sound similar to many other words. In the current study, the time course of neighborhood density effects in young and older adults was compared using event-related potentials (ERP) and behavioral responses in a lexical decision task for spoken words and nonwords presented either in quiet or in noise. Target items sounded similar either to many or to few other words (neighborhood density) but were balanced for the frequency of their component sounds (phonotactic probability). Behavioral effects of density were similar across age groups, but the event-related potential effects of density differed as a function of age group. For young adults, density modulated the amplitude of both the N400 and the later P300 or late positive complex (LPC). For older adults, density modulated only the amplitude of the P300/LPC. Thus, spreading activation to the semantics of lexical neighbors, indexed by the N400 density effect, appears to be reduced or delayed in adult aging. In contrast, effects of density on P300/LPC amplitude were present in both age groups, perhaps reflecting attentional allocation to items that resemble few words in the mental lexicon. The results constitute the first evidence that ERP effects of neighborhood density are affected by adult aging. The age difference may reflect either a unitary density effect that is delayed by approximately 150 ms in older adults, or multiple processes that are differentially affected by aging. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Brain-to-text: decoding spoken phrases from phone representations in the brain.

    PubMed

    Herff, Christian; Heger, Dominic; de Pesters, Adriana; Telaar, Dominic; Brunner, Peter; Schalk, Gerwin; Schultz, Tanja

    2015-01-01

    It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.
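The word and phone error rates quoted above are conventionally computed as the edit (Levenshtein) distance between the decoded and reference token sequences, normalized by the reference length. A minimal sketch of the standard metric (an illustration, not the authors' implementation):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    # WER = (substitutions + insertions + deletions) / number of reference words,
    # computed with the standard dynamic-programming edit distance over tokens.
    ref, hyp = reference.split(), hypothesis.split()
    dp = list(range(len(hyp) + 1))  # row for the empty reference prefix
    for i, rw in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, hw in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (rw != hw))  # substitution or match
    return dp[-1] / len(ref)

# One substitution ("decodes" -> "decoded") and one deletion ("spoken"): 2/5.
print(word_error_rate("the brain decodes spoken words", "the brain decoded words"))  # 0.4
```

A phone error rate is the same computation run over phone labels instead of word tokens.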

  3. Brain-to-text: decoding spoken phrases from phone representations in the brain

    PubMed Central

    Herff, Christian; Heger, Dominic; de Pesters, Adriana; Telaar, Dominic; Brunner, Peter; Schalk, Gerwin; Schultz, Tanja

    2015-01-01

    It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech. PMID:26124702

  4. Modeling of Word Translation: Activation Flow from Concepts to Lexical Items

    ERIC Educational Resources Information Center

    Roelofs, Ardi; Dijkstra, Ton; Gerakaki, Svetlana

    2013-01-01

    Whereas most theoretical and computational models assume a continuous flow of activation from concepts to lexical items in spoken word production, one prominent model assumes that the mapping of concepts onto words happens in a discrete fashion (Bloem & La Heij, 2003). Semantic facilitation of context pictures on word translation has been taken to…

  5. From Thoughts to Words.

    ERIC Educational Resources Information Center

    Glaus, Marlene

    The activities presented in this book, designed to help children translate their thoughts into spoken and written words, can supplement an elementary teacher's own language arts lessons. Objectives for each activity are listed, with the general focus of the many oral activities being to develop a rich verbal background for future written work. The…

  6. (Almost) Word for Word: As Voice Recognition Programs Improve, Students Reap the Benefits

    ERIC Educational Resources Information Center

    Smith, Mark

    2006-01-01

    Voice recognition software is hardly new--attempts at capturing spoken words and turning them into written text have been available to consumers for about two decades. But what was once an expensive and highly unreliable tool has made great strides in recent years, perhaps most recognized in programs such as Nuance's Dragon NaturallySpeaking…

  7. Iconic Factors and Language Word Order

    ERIC Educational Resources Information Center

    Moeser, Shannon Dawn

    1975-01-01

    College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)

  8. Automatic translation among spoken languages

    NASA Technical Reports Server (NTRS)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  9. Neural correlates of priming effects in children during spoken word processing with orthographic demands

    PubMed Central

    Cao, Fan; Khalid, Kainat; Zaveri, Rishi; Bolger, Donald J.; Bitan, Tali; Booth, James R.

    2009-01-01

    Priming effects were examined in 40 children (9–15 years old) using functional magnetic resonance imaging (fMRI). An orthographic judgment task required participants to determine if two sequentially presented spoken words had the same spelling for the rime. Four lexical conditions were designed: similar orthography and phonology (O+P+), similar orthography but different phonology (O+P−), similar phonology but different orthography (O−P+), and different orthography and phonology (O−P−). In left superior temporal gyrus, there was lower activation for targets in O+P+ than for those in O−P− and higher accuracy was correlated with stronger activation across all lexical conditions. These results provide evidence for phonological priming in children and greater elaboration of phonological representations in higher skill children, respectively. In left fusiform gyrus, there was lower activation for targets in O+P+ and O+P− than for those in O−P−, suggesting that visual similarity resulted in orthographic priming even with only auditory input. In left middle temporal gyrus, there was lower activation for targets in O+P+ than all other lexical conditions, suggesting that converging orthographic and phonological information resulted in a weaker influence on semantic representations. In addition, higher reading skill was correlated with weaker activation in left middle temporal gyrus across all lexical conditions, suggesting that higher skill children rely to a lesser degree on semantics as a compensatory mechanism. Finally, conflict effects but not priming effects were observed in left inferior frontal gyrus, suggesting that this region is involved in resolving conflicting orthographic and phonological information but not in perceptual priming. PMID:19665784

  10. Early Gesture Provides a Helping Hand to Spoken Vocabulary Development for Children with Autism, Down Syndrome, and Typical Development

    ERIC Educational Resources Information Center

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie

    2017-01-01

    Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the…

  11. Phonological and Semantic Knowledge Are Causal Influences on Learning to Read Words in Chinese

    ERIC Educational Resources Information Center

    Zhou, Lulin; Duff, Fiona J.; Hulme, Charles

    2015-01-01

    We report a training study that assesses whether teaching the pronunciation and meaning of spoken words improves Chinese children's subsequent attempts to learn to read the words. Teaching the pronunciations of words helps children to learn to read those same words, and teaching the pronunciations and meanings improves learning still further.…

  12. Impact of Diglossia on Word and Non-word Repetition among Language Impaired and Typically Developing Arabic Native Speaking Children.

    PubMed

    Saiegh-Haddad, Elinor; Ghawi-Dakwar, Ola

    2017-01-01

    The study tested the impact of the phonological and lexical distance between a dialect of Palestinian Arabic spoken in the north of Israel (SpA) and Modern Standard Arabic (StA or MSA) on word and non-word repetition in children with specific language impairment (SLI) and in typically developing (TD) age-matched controls. Fifty kindergarten children (25 SLI, 25 TD; mean age 5;5) and fifty first grade children (25 SLI, 25 TD; mean age 6;11) were tested with a repetition task for 1-4 syllable long real words and pseudo words. Items varied systematically in whether each encoded a novel StA phoneme or not, namely a phoneme that is only used in StA but not in the spoken dialect targeted. Real words also varied in whether they were lexically novel, meaning whether the word is used only in StA, but not in SpA. SLI children were found to significantly underperform TD children on all repetition tasks, indicating a general phonological memory deficit. More interesting for the current investigation is the observed strong and consistent effect of phonological novelty on word and non-word repetition in SLI and TD children, with a stronger effect observed in SLI. In contrast with phonological novelty, the effect of lexical novelty on word repetition was limited and it did not interact with group. The results are argued to reflect the role of linguistic distance in phonological memory for novel linguistic units in Arabic SLI and, hence, to support a specific Linguistic Distance Hypothesis of SLI in a diglossic setting. The implications of the findings for assessment, diagnosis and intervention with Arabic speaking children with SLI are discussed.

  13. Auditory semantic processing in dichotic listening: effects of competing speech, ear of presentation, and sentential bias on N400s to spoken words in context.

    PubMed

    Carey, Daniel; Mercure, Evelyne; Pizzioli, Fabrizio; Aydelott, Jennifer

    2014-12-01

    The effects of ear of presentation and competing speech on N400s to spoken words in context were examined in a dichotic sentence priming paradigm. Auditory sentence contexts with a strong or weak semantic bias were presented in isolation to the right or left ear, or with a competing signal presented in the other ear at an SNR of -12 dB. Target words were congruent or incongruent with the sentence meaning. Competing speech attenuated N400s to both congruent and incongruent targets, suggesting that the demand imposed by a competing signal disrupts the engagement of semantic comprehension processes. Bias strength affected N400 amplitudes differentially depending upon ear of presentation: weak contexts presented to the le/RH produced a more negative N400 response to targets than strong contexts, whereas no significant effect of bias strength was observed for sentences presented to the re/LH. The results are consistent with a model of semantic processing in which the RH relies on integrative processing strategies in the interpretation of sentence-level meaning.
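The -12 dB signal-to-noise ratio above is a fixed power ratio between the target sentence and the competing signal; mixing at a given SNR amounts to rescaling the masker. A sketch of the standard arithmetic (illustrative sample data, not the study's code):

```python
import math

def scale_masker(signal, masker, snr_db):
    # SNR(dB) = 10 * log10(P_signal / P_masker). Solve for the gain that
    # puts the masker at the requested level relative to the signal.
    p_sig = sum(x * x for x in signal) / len(signal)
    p_mask = sum(x * x for x in masker) / len(masker)
    gain = math.sqrt(p_sig / (p_mask * 10 ** (snr_db / 10)))
    return [x * gain for x in masker]

signal = [0.5, -0.5] * 100   # stand-in for the target sentence waveform
masker = [0.1, -0.1] * 100   # stand-in for the competing speech
scaled = scale_masker(signal, masker, -12.0)
# At -12 dB SNR the scaled masker carries roughly 16x the signal's power.
```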

  14. A randomized comparison of the effect of two prelinguistic communication interventions on the acquisition of spoken communication in preschoolers with ASD.

    PubMed

    Yoder, Paul; Stone, Wendy L

    2006-08-01

    This randomized group experiment compared the efficacy of 2 communication interventions (Responsive Education and Prelinguistic Milieu Teaching [RPMT] and the Picture Exchange Communication System [PECS]) on spoken communication in 36 preschoolers with autism spectrum disorders (ASD). Each treatment was delivered to children for a maximum total of 24 hr over a 6-month period. Spoken communication was assessed in a rigorous test of generalization at pretreatment, posttreatment, and 6-month follow-up periods. PECS was more successful than RPMT in increasing the number of nonimitative spoken communication acts and the number of different nonimitative words used at the posttreatment period. Considering growth over all 3 measurement periods, an exploratory analysis showed that growth rate of the number of different nonimitative words was faster in the PECS group than in the RPMT group for children who began treatment with relatively high object exploration. In contrast, analogous slopes were steeper in the RPMT group than in the PECS group for children who began treatment with relatively low object exploration.

  15. Locus of Word Frequency Effects in Spelling to Dictation: Still at the Orthographic Level!

    ERIC Educational Resources Information Center

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-01-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological…

  16. Auditory Discrimination Training in the Development of Word Analysis Skills.

    ERIC Educational Resources Information Center

    Coleman, James C.; McNeil, John D.

    The hypothesis that children who are taught to hear and designate separate sounds in spoken words will achieve greater success in learning to analyze printed words was tested. The subjects were 90 kindergarten children, predominantly Mexican-Americans and Negroes. Children were randomly assigned to one of three treatments, each of 3 weeks' duration…

  17. The Influence of the Phonological Neighborhood Clustering Coefficient on Spoken Word Recognition

    ERIC Educational Resources Information Center

    Chan, Kit Ying; Vitevitch, Michael S.

    2009-01-01

    Clustering coefficient--a measure derived from the new science of networks--refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words "bat", "hat", and "can", all of which are neighbors of the word "cat"; the words "bat" and…
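The clustering coefficient described above is the proportion of a target word's neighbors that are also neighbors of one another. A toy sketch using a one-letter-substitution neighbor rule over spellings (published work defines neighbors over phonemic transcriptions and also allows additions and deletions):

```python
from itertools import combinations

def neighbors(word, lexicon):
    # Toy neighbor rule: same length, differing in exactly one letter.
    return {w for w in lexicon
            if len(w) == len(word) and w != word
            and sum(a != b for a, b in zip(w, word)) == 1}

def clustering_coefficient(word, lexicon):
    ns = neighbors(word, lexicon)
    if len(ns) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(ns, 2) if b in neighbors(a, lexicon))
    return links / (len(ns) * (len(ns) - 1) / 2)

lexicon = {"cat", "bat", "hat", "can"}
# "cat" has neighbors bat, hat, can; bat and hat are neighbors of each other,
# but neither is a neighbor of can, so only 1 of 3 possible links exists.
print(clustering_coefficient("cat", lexicon))  # 1/3
```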

  18. Reducing Cancer and Cancer Disparities: Lessons From a Youth-Generated Diabetes Prevention Campaign.

    PubMed

    Schillinger, Dean; Ling, Pamela M; Fine, Sarah; Boyer, Cherrie B; Rogers, Elizabeth; Vargas, Roberto Ariel; Bibbins-Domingo, Kirsten; Chou, Wen-Ying Sylvia

    2017-09-01

    Adolescence and young adulthood, a period essential for determining exposures over the life-course, is an ideal time to intervene to lower cancer risk. This demographic group can be viewed as both the target audience and generator of messages for cancer prevention, such as skin cancer, obesity-, tobacco-, and human papillomavirus-related cancers. The purpose of this paper is to encourage innovative health communications that target youth; youth behavior; and the structural, environmental, and social determinants of youth behavior as critical areas of focus for cancer prevention and disparities reduction. The authors describe the rationale, processes, products, and early impacts of an award-winning youth diabetes prevention communication campaign model (The Bigger Picture) that harnesses spoken-word messages in school-based and social media presentations. The campaign supports minority adolescent and young adult artists to create content that aligns with values held closely by youth, values likely to resonate and effect change, such as defiance against authority, inclusion, and social justice. This campaign can be leveraged to prevent obesity, which is a cancer risk factor. Then, the authors propose concrete ways that The Bigger Picture's pedagogical model could be adapted for broader cancer prevention messaging for youth of color and youth stakeholders regarding tobacco-related cancers, skin cancers, and human papillomavirus-related cancers. The goal is to demonstrate how a youth-generated and youth-targeted prevention campaign can: (1) reframe conversations about cancer prevention, (2) increase awareness that cancer prevention is about social justice and health equity, and (3) catalyze action to change social norms and confront the social and environmental drivers of cancer disparities.

  19. Research on Spoken Dialogue Systems

    NASA Technical Reports Server (NTRS)

    Aist, Gregory; Hieronymus, James; Dowding, John; Hockey, Beth Ann; Rayner, Manny; Chatzichrisafis, Nikos; Farrell, Kim; Renders, Jean-Michel

    2010-01-01

    Research in the field of spoken dialogue systems has been performed with the goal of making such systems more robust and easier to use in demanding situations. The term "spoken dialogue systems" signifies unified software systems containing speech-recognition, speech-synthesis, dialogue management, and ancillary components that enable human users to communicate, using natural spoken language or nearly natural prescribed spoken language, with other software systems that provide information and/or services.

  20. Immediate effects of form-class constraints on spoken word recognition

    PubMed Central

    Magnuson, James S.; Tanenhaus, Michael K.; Aslin, Richard N.

    2008-01-01

    In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be linked to pair-wise conceptual relations between words. Using an artificial lexicon, we found immediate integration of syntactic expectations based on pragmatic constraints linked to syntactic categories rather than words: phonologically similar “nouns” and “adjectives” did not compete when a combination of syntactic and visual information strongly predicted form class. These results suggest that predictive context is integrated continuously, and that previous findings supporting delayed context integration stem from weak contexts rather than delayed integration. PMID:18675408

  1. Exploratory analysis of real personal emergency response call conversations: considerations for personal emergency response spoken dialogue systems.

    PubMed

    Young, Victoria; Rochon, Elizabeth; Mihailidis, Alex

    2016-11-14

    The purpose of this study was to derive data from real, recorded, personal emergency response call conversations to help improve the artificial intelligence and decision making capability of a spoken dialogue system in a smart personal emergency response system. The main study objectives were to: develop a model of personal emergency response; determine categories for the model's features; identify and calculate measures from call conversations (verbal ability, conversational structure, timing); and examine conversational patterns and relationships between measures and model features applicable for improving the system's ability to automatically identify call model categories and predict a target response. This study was exploratory and used mixed methods. Personal emergency response calls were pre-classified according to call model categories identified qualitatively from response call transcripts. The relationships between six verbal ability measures, three conversational structure measures, two timing measures and three independent factors: caller type, risk level, and speaker type, were examined statistically. Emergency medical response services were the preferred response for the majority of medium and high risk calls for both caller types. Older adult callers mainly requested non-emergency medical service responders during medium risk situations. By measuring the number of spoken words-per-minute and turn-length-in-words for the first spoken utterance of a call, older adult and care provider callers could be identified with moderate accuracy. Average call taker response time was calculated using the number-of-speaker-turns and time-in-seconds measures. Care providers and older adults used different conversational strategies when responding to call takers. The words 'ambulance' and 'paramedic' may hold different latent connotations for different callers. The data derived from the real personal emergency response recordings may help a spoken dialogue system…
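Two of the measures named above, spoken words-per-minute and turn-length-in-words for a call's first utterance, follow directly from a transcript with per-turn durations. A sketch with illustrative data (the variable names and sample turns are invented, not the study's schema):

```python
def words_per_minute(utterance: str, duration_s: float) -> float:
    # Verbal-ability measure: speech rate over a single timed utterance.
    return len(utterance.split()) * 60.0 / duration_s

def first_turn_length(turns: list[str]) -> int:
    # Conversational-structure measure: word count of the first spoken turn.
    return len(turns[0].split()) if turns else 0

caller_turns = ["hello I have fallen and I need some help", "no I am not hurt"]
print(first_turn_length(caller_turns))         # 9
print(words_per_minute(caller_turns[0], 4.5))  # 9 words in 4.5 s -> 120.0
```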

  2. Newly learned word forms are abstract and integrated immediately after acquisition

    PubMed Central

    Kapnoula, Efthymia C.; McMurray, Bob

    2015-01-01

    A hotly debated question in word learning concerns the conditions under which newly learned words compete or interfere with familiar words during spoken word recognition. This has recently been described as a key marker of the integration of a new word into the lexicon and was thought to require consolidation (Dumay & Gaskell, Psychological Science, 18, 35–39, 2007; Gaskell & Dumay, Cognition, 89, 105–132, 2003). Recently, however, Kapnoula, Packard, Gupta, and McMurray (Cognition, 134, 85–99, 2015) showed that interference can be observed immediately after a word is first learned, implying very rapid integration of new words into the lexicon. It is an open question whether these kinds of effects derive from episodic traces of novel words or from more abstract and lexicalized representations. Here we addressed this question by testing inhibition for newly learned words using training and test stimuli presented in different talker voices. During training, participants were exposed to a set of nonwords spoken by a female speaker. Immediately after training, we assessed the ability of the novel word forms to inhibit familiar words, using a variant of the visual world paradigm. Crucially, the test items were produced by a male speaker. An analysis of fixations showed that even with a change in voice, newly learned words interfered with the recognition of similar known words. These findings show that lexical competition effects from newly learned words spread across different talker voices, which suggests that newly learned words can be sufficiently lexicalized, and abstract with respect to talker voice, without consolidation. PMID:26202702

  3. When semantics aids phonology: A processing advantage for iconic word forms in aphasia.

    PubMed

    Meteyard, Lotte; Stoppard, Emily; Snudden, Dee; Cappa, Stefano F; Vigliocco, Gabriella

    2015-09-01

    Iconicity is the non-arbitrary relation between properties of a phonological form and semantic content (e.g. "moo", "splash"). It is a common feature of both spoken and signed languages, and recent evidence shows that iconic forms confer an advantage during word learning. We explored whether iconic forms conferred a processing advantage for 13 individuals with aphasia following left-hemisphere stroke. Iconic and control words were compared in four different tasks: repetition, reading aloud, auditory lexical decision and visual lexical decision. An advantage for iconic words was seen for some individuals in all tasks, with consistent group effects emerging in reading aloud and auditory lexical decision. Both these tasks rely on mapping between semantics and phonology. We conclude that iconicity aids spoken word processing for individuals with aphasia. This advantage is due to a stronger connection between semantic information and phonological forms.

  4. Impact of Diglossia on Word and Non-word Repetition among Language Impaired and Typically Developing Arabic Native Speaking Children

    PubMed Central

    Saiegh-Haddad, Elinor; Ghawi-Dakwar, Ola

    2017-01-01

    The study tested the impact of the phonological and lexical distance between a dialect of Palestinian Arabic spoken in the north of Israel (SpA) and Modern Standard Arabic (StA or MSA) on word and non-word repetition in children with specific language impairment (SLI) and in typically developing (TD) age-matched controls. Fifty kindergarten children (25 SLI, 25 TD; mean age 5;5) and fifty first grade children (25 SLI, 25 TD; mean age 6;11) were tested with a repetition task for 1–4 syllable long real words and pseudo words. Items varied systematically in whether each encoded a novel StA phoneme or not, namely a phoneme that is only used in StA but not in the spoken dialect targeted. Real words also varied in whether they were lexically novel, meaning whether the word is used only in StA, but not in SpA. SLI children were found to significantly underperform TD children on all repetition tasks, indicating a general phonological memory deficit. More interesting for the current investigation is the observed strong and consistent effect of phonological novelty on word and non-word repetition in SLI and TD children, with a stronger effect observed in SLI. In contrast with phonological novelty, the effect of lexical novelty on word repetition was limited and it did not interact with group. The results are argued to reflect the role of linguistic distance in phonological memory for novel linguistic units in Arabic SLI and, hence, to support a specific Linguistic Distance Hypothesis of SLI in a diglossic setting. The implications of the findings for assessment, diagnosis and intervention with Arabic speaking children with SLI are discussed. PMID:29213248

  5. Children show right-lateralized effects of spoken word-form learning

    PubMed Central

    Nora, Anni; Karvonen, Leena; Renvall, Hanna; Parviainen, Tiina; Kim, Jeong-Young; Service, Elisabet; Salmelin, Riitta

    2017-01-01

    It is commonly thought that phonological learning is different in young children compared to adults, possibly due to the speech processing system not yet having reached full native-language specialization. However, the neurocognitive mechanisms of phonological learning in children are poorly understood. We employed magnetoencephalography (MEG) to track cortical correlates of incidental learning of meaningless word forms over two days as 6–8-year-olds overtly repeated them. Native (Finnish) pseudowords were compared with words of foreign sound structure (Korean) to investigate whether the cortical learning effects would be more dependent on previous proficiency in the language rather than maturational factors. Half of the items were encountered four times on the first day and once more on the following day. Incidental learning of these recurring word forms manifested as improved repetition accuracy and a correlated reduction of activation in the right superior temporal cortex, similarly for both languages and on both experimental days, and in contrast to a salient left-hemisphere emphasis previously reported in adults. We propose that children, when learning new word forms in either native or foreign language, are not yet constrained by left-hemispheric segmental processing and established sublexical native-language representations. Instead, they may rely more on supra-segmental contours and prosody. PMID:28158201

  6. Children show right-lateralized effects of spoken word-form learning.

    PubMed

    Nora, Anni; Karvonen, Leena; Renvall, Hanna; Parviainen, Tiina; Kim, Jeong-Young; Service, Elisabet; Salmelin, Riitta

    2017-01-01

    It is commonly thought that phonological learning is different in young children compared to adults, possibly due to the speech processing system not yet having reached full native-language specialization. However, the neurocognitive mechanisms of phonological learning in children are poorly understood. We employed magnetoencephalography (MEG) to track cortical correlates of incidental learning of meaningless word forms over two days as 6-8-year-olds overtly repeated them. Native (Finnish) pseudowords were compared with words of foreign sound structure (Korean) to investigate whether the cortical learning effects would be more dependent on previous proficiency in the language rather than maturational factors. Half of the items were encountered four times on the first day and once more on the following day. Incidental learning of these recurring word forms manifested as improved repetition accuracy and a correlated reduction of activation in the right superior temporal cortex, similarly for both languages and on both experimental days, and in contrast to a salient left-hemisphere emphasis previously reported in adults. We propose that children, when learning new word forms in either native or foreign language, are not yet constrained by left-hemispheric segmental processing and established sublexical native-language representations. Instead, they may rely more on supra-segmental contours and prosody.

  7. Neurophysiological Evidence for Underspecified Lexical Representations: Asymmetries with Word Initial Variations

    ERIC Educational Resources Information Center

    Friedrich, Claudia K.; Lahiri, Aditi; Eulitz, Carsten

    2008-01-01

    How does the mental lexicon cope with phonetic variants in recognition of spoken words? Using a lexical decision task with and without fragment priming, the authors compared the processing of German words and pseudowords that differed only in the place of articulation of the initial consonant (place). Across both experiments, event-related brain…

  8. The role of syllabic structure in French visual word recognition.

    PubMed

    Rouibah, A; Taft, M

    2001-03-01

    Two experiments are reported in which the processing units involved in the reading of French polysyllabic words are examined. A comparison was made between units following the maximal onset principle (i.e., the spoken syllable) and units following the maximal coda principle (i.e., the basic orthographic syllabic structure [BOSS]). In the first experiment, it took longer to recognize that a syllable was the beginning of a word (e.g., the FOE of FOETUS) than to make the same judgment of a BOSS (e.g., FOET). The fact that a BOSS plus one letter (e.g., FOETU) also took longer to judge than the BOSS indicated that the maximal coda principle applies to the units of processing in French. The second experiment confirmed this, using a lexical decision task with the different units being demarcated on the basis of color. It was concluded that the syllabic structure that is so clearly manifested in the spoken form of French is not involved in visual word recognition.
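The two segmentation principles contrasted above can be illustrated with a toy parser: the maximal onset principle pushes intervocalic consonants into the following syllable, while the maximal coda (BOSS) principle keeps them with the preceding unit. A deliberately simplified sketch over letter strings (the real BOSS is defined with orthotactic legality constraints, and treating every "y" as a vowel is a simplification this toy version makes):

```python
VOWELS = set("aeiouy")

def first_unit(word: str, principle: str) -> str:
    # Scan past any initial onset consonants, then the first vowel group.
    i = 0
    while i < len(word) and word[i] not in VOWELS:
        i += 1
    while i < len(word) and word[i] in VOWELS:
        i += 1
    if principle == "maximal_onset":
        return word[:i]      # intervocalic consonants go to the next syllable
    j = i
    while j < len(word) and word[j] not in VOWELS:
        j += 1
    return word[:j]          # maximal coda: keep the intervocalic consonants

print(first_unit("foetus", "maximal_onset"))  # 'foe', the spoken syllable
print(first_unit("foetus", "maximal_coda"))   # 'foet', the BOSS
```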

  9. Using Key Part-of-Speech Analysis to Examine Spoken Discourse by Taiwanese EFL Learners

    ERIC Educational Resources Information Center

    Lin, Yen-Liang

    2015-01-01

    This study reports on a corpus analysis of samples of spoken discourse between a group of British and Taiwanese adolescents, with the aim of exploring the statistically significant differences in the use of grammatical categories between the two groups of participants. The key word method extended to a part-of-speech level using the web-based…

  10. The Exception Does Not Rule: Attention Constrains Form Preparation in Word Production

    ERIC Educational Resources Information Center

    O'Séaghdha, Pádraig G.; Frazer, Alexandra K.

    2014-01-01

    Form preparation in word production, the benefit of exploiting a useful common sound (such as the first phoneme) of iteratively spoken small groups of words, is notoriously fastidious, exhibiting a seemingly categorical, all-or-none character and a corresponding susceptibility to "killers" of preparation. In particular, the presence of a…

  11. The Self-Organization of a Spoken Word

    PubMed Central

    Holden, John G.; Rajaraman, Srinivasan

    2012-01-01

    Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics, that is, interaction-dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participants' distributions than the ex-Gaussian or ex-Wald, alternatives corresponding to additive, superposed component processes. The evidence for interaction-dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions. PMID:22783213

  12. An Analysis of the Most Frequently Occurring Words in Spoken American English.

    ERIC Educational Resources Information Center

    Plant, Geoff

    1999-01-01

    A study analyzed the frequency of occurrence of consonants, vowels, and diphthongs, the syllabic structure, and the segmental structure of the 311 monosyllabic words among the 500 words that occur most frequently in spoken American English. Three manners of articulation accounted for nearly 75 percent of all consonant occurrences: stops, semi-vowels, and nasals.…

  13. Effects of Aging and Noise on Real-Time Spoken Word Recognition: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Ben-David, Boaz M.; Chambers, Craig G.; Daneman, Meredyth; Pichora-Fuller, M. Kathleen; Reingold, Eyal M.; Schneider, Bruce A.

    2011-01-01

    Purpose: To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. Method: Twenty-four younger and 24 older adults followed spoken instructions referring to depicted…

  14. Project ASPIRE: Spoken Language Intervention Curriculum for Parents of Low-socioeconomic Status and Their Deaf and Hard-of-Hearing Children.

    PubMed

    Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen

    2016-02-01

    Objective: To investigate the impact of a spoken language intervention curriculum aimed at improving the language environments that caregivers of low socioeconomic status (SES) provide for their deaf and hard-of-hearing (D/HH) children with cochlear implants and hearing aids, in order to support the children's spoken language development. Design: Quasiexperimental. Setting: Tertiary. Participants: Thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies Medicaid or WIC/LINK) with children aged < 4.5 years who had hearing loss of ≥ 30 dB between 500 and 4000 Hz and used at least one amplification device with adequate amplification (hearing aid, cochlear implant, or osseo-integrated device). Intervention: Behavioral; a caregiver-directed educational intervention curriculum designed to improve D/HH children's early language environments. Main outcome measures: Changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], and Conversational Turn Count [CTC]). Results: Significant increases in caregiver questionnaire scores as well as in utterances, word types, word tokens, and MLU in the treatment group but not the control group; no significant changes in LENA outcomes. Conclusions: The results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.

  15. Perception and Lateralization of Spoken Emotion by Youths with High-Functioning Forms of Autism

    ERIC Educational Resources Information Center

    Baker, Kimberly F.; Montgomery, Allen A.; Abramson, Ruth

    2010-01-01

    The perception and the cerebral lateralization of spoken emotions were investigated in children and adolescents with high-functioning forms of autism (HFFA), and age-matched typically developing controls (TDC). A dichotic listening task using nonsense passages was used to investigate the recognition of four emotions: happiness, sadness, anger, and…

  16. Are Phonological Representations of Printed and Spoken Language Isomorphic? Evidence from the Restrictions on Unattested Onsets

    ERIC Educational Resources Information Center

    Berent, Iris

    2008-01-01

    Are the phonological representations of printed and spoken words isomorphic? This question is addressed by investigating the restrictions on onsets. Cross-linguistic research suggests that onsets of rising sonority are preferred to sonority plateaus, which, in turn, are preferred to sonority falls (e.g., bnif, bdif, lbif). Of interest is whether…

  17. Estimating when and how words are acquired: A natural experiment on the development of the mental lexicon

    PubMed Central

    Auer, Edward T.; Bernstein, Lynne E.

    2009-01-01

    Purpose: To examine the sensitivity of subjective estimates of age of acquisition (AOA) and acquisition channel (AC; printed, spoken, or signed) to differences in word exposure within and between populations that differ dramatically in perceptual experience. Methods: 50 participants with early-onset deafness and 50 with normal hearing rated 175 words in terms of subjective age of acquisition and acquisition channel. Additional data were collected using a standardized test of reading and vocabulary. Results: Deaf participants rated words as learned later (M = 10 years) than did participants with normal hearing (M = 8.5 years) (F(1,99) = 28.59; p < .01). Group-averaged item ratings of AOA were highly correlated across the groups (r = .971) and with normative order of acquisition (deaf: r = .950; hearing: r = .946). The groups differed in their ratings of acquisition channel: hearing: printed = 30%, spoken = 70%, signed = 0%; deaf: printed = 45%, spoken = 38%, signed = 17%. Conclusions: Subjective AOA and AC measures are sensitive to between- and within-group differences in word experience. The results demonstrate that these subjective measures can be applied as reliable proxies for direct measures of lexical development in studies of lexical knowledge in adults with prelingual-onset deafness. PMID:18506048

  18. Neural Processing of Spoken Words in Specific Language Impairment and Dyslexia

    ERIC Educational Resources Information Center

    Helenius, Paivi; Parviainen, Tiina; Paetau, Ritva; Salmelin, Riitta

    2009-01-01

    Young adults with a history of specific language impairment (SLI) differ from reading-impaired (dyslexic) individuals in terms of limited vocabulary and poor verbal short-term memory. Phonological short-term memory has been shown to play a significant role in learning new words. We investigated the neural signatures of auditory word recognition…

  19. Prosody and Spoken Word Recognition in Early and Late Spanish-English Bilingual Individuals

    ERIC Educational Resources Information Center

    Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.

    2017-01-01

    Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…

  20. Integration of Pragmatic and Phonetic Cues in Spoken Word Recognition

    PubMed Central

    Rohde, Hannah; Ettlinger, Marc

    2015-01-01

    Although previous research has established that multiple top-down factors guide the identification of words during speech processing, the ultimate range of information sources that listeners integrate from different levels of linguistic structure is still unknown. In a set of experiments, we investigate whether comprehenders can integrate information from the two most disparate domains: pragmatic inference and phonetic perception. Using contexts that trigger pragmatic expectations regarding upcoming coreference (expectations for either he or she), we test listeners' identification of phonetic category boundaries (using acoustically ambiguous words on the /hi/~/ʃi/ continuum). The results indicate that, in addition to phonetic cues, word recognition also reflects pragmatic inference. These findings are consistent with evidence for top-down contextual effects from lexical, syntactic, and semantic cues, but they extend this previous work by testing cues at the pragmatic level and by eliminating a statistical-frequency confound that might otherwise explain the previously reported results. We conclude by exploring the time-course of this interaction and discussing how different models of cue integration could be adapted to account for our results. PMID:22250908

  1. Immediate lexical integration of novel word forms

    PubMed Central

    Kapnoula, Efthymia C.; McMurray, Bob

    2014-01-01

    It is well known that familiar words inhibit each other during spoken word recognition. However, we do not know how and under what circumstances newly learned words become integrated with the lexicon in order to engage in this competition. Previous work on word learning has highlighted the importance of offline consolidation (Gaskell & Dumay, 2003) and meaning (Leach & Samuel, 2007) to establish this integration. In two experiments we test the necessity of these factors by examining the inhibition between newly learned items and familiar words immediately after learning. Participants learned a set of nonwords without meanings in active (Exp 1) or passive (Exp 2) exposure paradigms. After training, participants performed a visual world paradigm task to assess inhibition from these newly learned items. An analysis of participants’ fixations suggested that the newly learned words were able to engage in competition with known words without any consolidation. PMID:25460382

  2. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2012-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an event-related potential (ERP) norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a…

  3. Word Length and Lexical Activation: Longer Is Better

    ERIC Educational Resources Information Center

    Pitt, Mark A.; Samuel, Arthur G.

    2006-01-01

    Many models of spoken word recognition posit the existence of lexical and sublexical representations, with excitatory and inhibitory mechanisms used to affect the activation levels of such representations. Bottom-up evidence provides excitatory input, and inhibition from phonetically similar representations leads to lexical competition. In such a…

  4. Spoken Records. Third Edition.

    ERIC Educational Resources Information Center

    Roach, Helen

    Surveying 75 years of accomplishment in the field of spoken recording, this reference work critically evaluates commercially available recordings selected for excellence of execution, literary or historical merit, interest, and entertainment value. Some types of spoken records included are early recording, documentaries, lectures, interviews,…

  5. Test-retest reliability of eye tracking in the visual world paradigm for the study of real-time spoken word recognition.

    PubMed

    Farris-Trimble, Ashley; McMurray, Bob

    2013-08-01

    Researchers have begun to use eye tracking in the visual world paradigm (VWP) to study clinical differences in language processing, but the reliability of such laboratory tests has rarely been assessed. In this article, the authors assess the test-retest reliability of the VWP for spoken word recognition. Methods: Participants performed an auditory VWP task in repeated sessions and a visual-only VWP task in a third session. The authors performed correlation and regression analyses on several parameters to determine which reflect reliable behavior and which are predictive of behavior in later sessions. Results: The fixation parameters most closely related to the timing and degree of fixations were moderately to strongly correlated across days, whereas parameters related to the rate of increase or decrease of fixations to particular items were less strongly correlated. Moreover, when factors derived from the visual-only task were included, the performance of the regression model was at least moderately correlated with Day 2 performance on all parameters (R > .30). Conclusions: The VWP is stable enough (with some caveats) to serve as an individual measure. These findings suggest guidelines for future use of the paradigm and for areas of improvement in both methodology and analysis.

  6. Phonemic carryover perseveration: word blends.

    PubMed

    Buckingham, Hugh W; Christman, Sarah S

    2004-11-01

    This article will outline and describe the aphasic disorder of recurrent perseveration and will demonstrate how it interacts with the retrieval and production of spoken words in the language of fluent aphasic patients who have sustained damage to the left (dominant) posterior temporoparietal lobe. We will concentrate on the various kinds of sublexical segmental perseverations (the so-called phonemic carryovers of Santo Pietro and Rigrodsky) that most often play a role in the generation of word blends. We will show how perseverative blends allow the clinician to better understand the dynamics of word and syllable production in fluent aphasia by scrutinizing the "onset/rime" and "onset/superrime" constituents of monosyllabic and polysyllabic words, respectively. We will demonstrate to the speech-language pathologist the importance of the trochee stress pattern and the possibility that its metrical template may constitute a structural unit that can be perseverated.

  7. Parallel language activation and cognitive control during spoken word recognition in bilinguals

    PubMed Central

    Blumenfeld, Henrike K.; Marian, Viorica

    2013-01-01

    Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300 and 500 ms after word onset was associated with smaller Stroop effects; between 633 and 767 ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842

  8. Immediate lexical integration of novel word forms.

    PubMed

    Kapnoula, Efthymia C; Packard, Stephanie; Gupta, Prahlad; McMurray, Bob

    2015-01-01

    It is well known that familiar words inhibit each other during spoken word recognition. However, we do not know how and under what circumstances newly learned words become integrated with the lexicon in order to engage in this competition. Previous work on word learning has highlighted the importance of offline consolidation (Gaskell & Dumay, 2003) and meaning (Leach & Samuel, 2007) to establish this integration. In two experiments we test the necessity of these factors by examining the inhibition between newly learned items and familiar words immediately after learning. Participants learned a set of nonwords without meanings in active (Experiment 1) or passive (Experiment 2) exposure paradigms. After training, participants performed a visual world paradigm task to assess inhibition from these newly learned items. An analysis of participants' fixations suggested that the newly learned words were able to engage in competition with known words without any consolidation.

  9. Word length and lexical activation: longer is better.

    PubMed

    Pitt, Mark A; Samuel, Arthur G

    2006-10-01

    Many models of spoken word recognition posit the existence of lexical and sublexical representations, with excitatory and inhibitory mechanisms used to affect the activation levels of such representations. Bottom-up evidence provides excitatory input, and inhibition from phonetically similar representations leads to lexical competition. In such a system, long words should produce stronger lexical activation than short words, for 2 reasons: Long words provide more bottom-up evidence than short words, and short words are subject to greater inhibition due to the existence of more similar words. Four experiments provide evidence for this view. In addition, reaction-time-based partitioning of the data shows that long words generate greater activation that is available both earlier and for a longer time than is the case for short words. As a result, lexical influences on phoneme identification are extremely robust for long words but are quite fragile and condition-dependent for short words. Models of word recognition must consider words of all lengths to capture the true dynamics of lexical activation.

  10. Verbal Word Choice of Effective Reading Teachers

    ERIC Educational Resources Information Center

    Moran, Kelly A.

    2013-01-01

    Humans are fragile beings easily influenced by the verbal behaviors of others. Spoken words can have a multitude of effects on an individual, and the phrases and statements teachers use in their classrooms on a daily basis have the potential to be either detrimental or inspirational. As increasing numbers of students arrive at schools from broken…

  11. Independent Effects of Orthographic and Phonological Facilitation on Spoken Word Production in Mandarin

    ERIC Educational Resources Information Center

    Zhang, Qingfang; Chen, Hsuan-Chih; Weekes, Brendan Stuart; Yang, Yufang

    2009-01-01

    A picture-word interference paradigm with visually presented distractors was used to investigate the independent effects of orthographic and phonological facilitation on Mandarin monosyllabic word production. Both the stimulus-onset asynchrony (SOA) and the picture-word relationship along different lexical dimensions were varied. We observed a…

  12. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    PubMed Central

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2014-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word recognition. The current study examined the effects of handwriting on a series of lexical variables thought to influence bottom-up and top-down processing, including word frequency, regularity, bidirectional consistency, and imageability. The results suggest that the natural physical ambiguity of handwritten stimuli forces a greater reliance on top-down processes, because almost all effects were magnified, relative to conditions with computer print. These findings suggest that processes of word perception naturally adapt to handwriting, compensating for physical ambiguity by increasing top-down feedback. PMID:20695708

  13. Spoken Word Recognition and Serial Recall of Words from Components in the Phonological Network

    ERIC Educational Resources Information Center

    Siew, Cynthia S. Q.; Vitevitch, Michael S.

    2016-01-01

    Network science uses mathematical techniques to study complex systems such as the phonological lexicon (Vitevitch, 2008). The phonological network consists of a "giant component" (the largest connected component of the network) and "lexical islands" (smaller groups of words that are connected to each other, but not to the giant…

  14. Tone of voice guides word learning in informative referential contexts.

    PubMed

    Reinisch, Eva; Jesse, Alexandra; Nygaard, Lynne C

    2013-06-01

    Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., "daxen") spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant-ant for big-small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalize them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.

  15. Tone of voice guides word learning in informative referential contexts

    PubMed Central

    Reinisch, Eva; Jesse, Alexandra; Nygaard, Lynne C.

    2012-01-01

    Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker’s tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., “daxen”) spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant-ant for big-small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives’ meanings, and, even in the absence of informative ToV, generalise them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning. PMID:23134484

  16. Youth Participatory Action Research and Educational Transformation: The Potential of Intertextuality as a Methodological Tool

    ERIC Educational Resources Information Center

    Bertrand, Melanie

    2016-01-01

    In this article, Melanie Bertrand explores the potential of using the concept of intertextuality--which captures the way snippets of written or spoken text from one source become incorporated into other sources--in the study and practice of youth participatory action research (YPAR). Though this collective and youth-centered form of research…

  17. Neighborhoods of Words in the Mental Lexicon. Research on Speech Perception. Technical Report No. 6.

    ERIC Educational Resources Information Center

    Luce, Paul A.

    A study employed computational and experimental methods to address a number of issues related to the representation and structural organization of spoken words in the mental lexicon. Using a computerized lexicon consisting of phonetic transcriptions of 20,000 words, "similarity neighborhoods" for each of the transcriptions were computed.…
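
The similarity neighborhoods described above are conventionally computed as the set of lexicon entries that differ from a target by a single phoneme substitution, deletion, or addition; the report itself computed them over phonetic transcriptions of a 20,000-word lexicon. A minimal sketch of that one-segment rule, using orthographic strings as stand-ins for transcriptions:

```python
def is_neighbor(a, b):
    """True if b differs from a by exactly one segment substitution, deletion, or addition."""
    if a == b or abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):  # same length: exactly one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    # lengths differ by one: the shorter must be a subsequence of the longer
    short, longer = (a, b) if len(a) < len(b) else (b, a)
    i = 0
    for seg in longer:
        if i < len(short) and short[i] == seg:
            i += 1
    return i == len(short)

def neighborhood(word, lexicon):
    """All lexicon entries in the similarity neighborhood of `word`."""
    return sorted(w for w in lexicon if is_neighbor(word, w))

lexicon = ["kat", "bat", "cat", "cats", "at", "dog"]
print(neighborhood("cat", lexicon))  # → ['at', 'bat', 'cats', 'kat']
```

With real phonetic transcriptions, the words would be sequences of phoneme symbols (e.g., tuples) rather than letter strings; the edit-distance-1 logic is unchanged.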

  18. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  19. A database for semantic, grammatical, and frequency properties of the first words acquired by Italian children.

    PubMed

    Rinaldi, Pasquale; Barca, Laura; Burani, Cristina

    2004-08-01

    The CFVlexvar.xls database includes imageability, frequency, and grammatical properties of the first words acquired by Italian children. For each of 519 words that are known by children 18-30 months of age (taken from Caselli & Casadio's, 1995, Italian version of the MacArthur Communicative Development Inventory), new values of imageability are provided and values for age of acquisition, child written frequency, and adult written and spoken frequency are included. In this article, correlations among the variables are discussed and the words are grouped into grammatical categories. The results show that words acquired early have imageable referents, are frequently used in the texts read and written by elementary school children, and are frequent in adult written and spoken language. Nouns are acquired earlier and are more imageable than both verbs and adjectives. The composition in grammatical categories of the child's first vocabulary reflects the composition of adult vocabulary. The full set of these norms can be downloaded from www.psychonomic.org/archive/.

  20. Effects of Word Frequency and Transitional Probability on Word Reading Durations of Younger and Older Speakers.

    PubMed

    Moers, Cornelia; Meyer, Antje; Janse, Esther

    2017-06-01

    High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned on its right or left neighbouring word. This corpus study investigates whether three groups of Dutch speakers (younger children aged 8-12 years, adolescents aged 12-18 years, and older adults aged 62-95 years) show frequency and TP context effects on spoken word durations in reading aloud, and whether the age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
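
The forward variant of transitional probability used in such corpus studies, P(next word | current word), can be estimated directly from bigram and unigram counts. A minimal sketch with a toy English corpus (the study's own corpus was Dutch read-aloud speech):

```python
from collections import Counter

def transitional_probabilities(tokens):
    """Forward TP: P(w2 | w1) = count(w1 w2) / count(w1 occurring with a right neighbour)."""
    left_counts = Counter(tokens[:-1])  # tokens that have a right neighbour
    bigram_counts = Counter(zip(tokens, tokens[1:]))
    return {pair: n / left_counts[pair[0]] for pair, n in bigram_counts.items()}

corpus = "the cat sat on the mat the cat ran".split()
tp = transitional_probabilities(corpus)
# "the" is followed by "cat" on 2 of its 3 occurrences as a left neighbour
print(round(tp[("the", "cat")], 3))  # → 0.667
```

The backward TP, P(current word | next word), would be computed analogously by conditioning on the right neighbour's counts instead.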

  1. Recurrent Word Combinations in EAP Test-Taker Writing: Differences between High- and Low-Proficiency Levels

    ERIC Educational Resources Information Center

    Appel, Randy; Wood, David

    2016-01-01

    The correct use of frequently occurring word combinations represents an important part of language proficiency in spoken and written discourse. This study investigates the use of English-language recurrent word combinations in low-level and high-level L2 English academic essays sourced from the Canadian Academic English Language (CAEL) assessment.…

  2. Speech Perception Engages a General Timer: Evidence from a Divided Attention Word Identification Task

    ERIC Educational Resources Information Center

    Casini, Laurence; Burle, Boris; Nguyen, Noel

    2009-01-01

    Time is essential to speech. The duration of speech segments plays a critical role in the perceptual identification of these segments, and therefore in that of spoken words. Here, using a French word identification task, we show that vowels are perceived as shorter when attention is divided between two tasks, as compared to a single task control…

  3. The Influence of Topic Status on Written and Spoken Sentence Production

    PubMed Central

    Cowles, H. Wind; Ferreira, Victor S.

    2012-01-01

    Four experiments investigate the influence of topic status and givenness on how speakers and writers structure sentences. The results of these experiments show that when a referent is previously given, it is more likely to be produced early in both sentences and word lists, confirming prior work showing that givenness increases the accessibility of given referents. When a referent is previously given and assigned topic status, it is even more likely to be produced early in a sentence, but not in a word list. Thus, there appears to be an early mention advantage for topics that is present in both written and spoken modalities, but is specific to sentence production. These results suggest that information-structure constructs like topic exert an influence that is not based only on increased accessibility, but also reflects mapping to syntactic structure during sentence production. PMID:22408281

  4. New Names for Known Things: On the Association of Novel Word Forms with Existing Semantic Information

    ERIC Educational Resources Information Center

    Dobel, Christian; Junghofer, Markus; Breitenstein, Caterina; Klauke, Benedikt; Knecht, Stefan; Pantev, Christo; Zwitserlood, Pienie

    2010-01-01

    The plasticity of the adult memory network for integrating novel word forms (lexemes) was investigated with whole-head magnetoencephalography (MEG). We showed that spoken word forms of an (artificial) foreign language are integrated rapidly and successfully into existing lexical and conceptual memory networks. The new lexemes were learned in an…

  5. Evaluation of the Kòts'iìhtła (“We Light the Fire”) Project: building resiliency and connections through strengths-based creative arts programming for Indigenous youth

    PubMed Central

    Fanian, Sahar; Young, Stephanie K.; Mantla, Mason; Daniels, Anita; Chatwood, Susan

    2015-01-01

    Background The creative arts – music, film, visual arts, dance, theatre, spoken word, literature, among others – are gradually being recognised as effective health promotion tools to empower and engage Indigenous youth and to improve health and well-being in their communities. Arts-based programming has also had positive impacts in promoting health, mental wellness and resiliency amongst youth. However, the impacts and successes of such programming are often not formally reported, as reflected by the paucity of evaluations and reports in the literature. Objective The objective of this study was to evaluate a creative arts workshop for Tłįchǫ youth in which youth explored critical community issues and found solutions together through the arts. We sought to identify the workshop’s areas of success and challenge. Ultimately, our goal is to develop a community-led, youth-driven model to strengthen resiliency through youth engagement in the arts in circumpolar regions. Design Using a mixed-methods approach, we drew on observational field notes, focus groups, questionnaires, and reflective practice to evaluate the workshop. Four youth and five facilitators participated in this process overall. Results Youth reported gaining confidence and new skills, both artistic and personal. Many youth found the workshop to be engaging, enjoyable and culturally relevant. Youth expressed an interest in continuing their involvement with the arts and spreading their messages through art to other youth and others in their communities. Conclusions Engagement and participation in the arts have the potential to build resiliency, form relationships, and stimulate discussions for community change amongst youth living in the North. PMID:26265489

  7. African American English and Spelling: How Do Second Graders Spell Dialect-Sensitive Features of Words?

    ERIC Educational Resources Information Center

    Patton-Terry, Nicole; Connor, Carol

    2010-01-01

    This study explored the spelling skills of African American second graders who produced African American English (AAE) features in speech. The children (N = 92), who varied in spoken AAE use and word reading skills, were asked to spell words that contained phonological and morphological dialect-sensitive (DS) features that can vary between AAE and…

  8. Zipf's Law for Word Frequencies: Word Forms versus Lemmas in Long Texts.

    PubMed

    Corral, Álvaro; Boleda, Gemma; Ferrer-i-Cancho, Ramon

    2015-01-01

    Zipf's law is a fundamental paradigm in the statistics of written and spoken natural language as well as in other communication systems. We raise the question of the elementary units for which Zipf's law should hold in the most natural way, studying its validity for plain word forms and for the corresponding lemma forms. We analyze several long literary texts comprising four languages, with different levels of morphological complexity. In all cases Zipf's law is fulfilled, in the sense that a power-law distribution of word or lemma frequencies is valid for several orders of magnitude. We investigate the extent to which the word-lemma transformation preserves two parameters of Zipf's law: the exponent and the low-frequency cut-off. We are not able to demonstrate a strict invariance of the tail, as for a few texts both exponents deviate significantly, but we conclude that the exponents are very similar, despite the remarkable transformation that going from words to lemmas represents, considerably affecting all ranges of frequencies. In contrast, the low-frequency cut-offs are less stable, tending to increase substantially after the transformation.
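    The rank-frequency regularity this entry examines can be sketched in a few lines. This is a toy illustration rather than the authors' pipeline: the whitespace tokenizer and the plain least-squares fit in log-log coordinates are simplifying assumptions.

```python
from collections import Counter
import math

def zipf_exponent(text):
    """Estimate the Zipf exponent: fit log(frequency) against log(rank)
    over word forms and return the negated slope (Zipf predicts ~1)."""
    # Rank word-form frequencies from most to least frequent.
    freqs = sorted(Counter(text.lower().split()).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    # Ordinary least-squares slope in log-log coordinates.
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```

    On a synthetic text whose r-th most frequent word appears about 1000/r times, the estimate comes out close to 1. As the study notes, real corpora obey the power law only over several orders of magnitude, and moving from word forms to lemmas chiefly shifts the low-frequency cut-off rather than the exponent.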

  9. Treating youths with selective mutism with an alternating design of exposure-based practice and contingency management.

    PubMed

    Vecchio, Jennifer; Kearney, Christopher A

    2009-12-01

    Selective mutism is a severe childhood disorder involving failure to speak in public situations in which speaking is expected. The present study examined 9 youths with selective mutism treated with child-focused, exposure-based practices and parent-focused contingency management via an alternating treatments design. Broadband measures of functioning were employed, but particular focus was placed on behavioral assessment of words spoken audibly and daily in public situations. Treatment ranged from 8 to 32 sessions and resulted in positive end-state functioning for 8 of 9 participants. Broader analyses indicated greater effectiveness for exposure-based practice than for contingency management. The results support recent case reports of behavioral treatment for this population, but in a more rigorous fashion. Clinical and research challenges are discussed, including caveats about the length and intensity of treatment for this population and the need to develop standardized daily measures.

  10. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    ERIC Educational Resources Information Center

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  11. Early Word Comprehension in Infants: Replication and Extension

    PubMed Central

    Bergelson, Elika; Swingley, Daniel

    2014-01-01

    A handful of recent experimental reports have shown that infants of 6 to 9 months know the meanings of some common words. Here, we replicate and extend these findings. With a new set of items, we show that when young infants (age 6-16 months, n=49) are presented with side-by-side video clips depicting various common early words, and one clip is named in a sentence, they look at the named video at above-chance rates. We demonstrate anew that infants understand common words by 6-9 months, and that performance increases substantially around 14 months. The results imply that 6-9 month olds’ failure to understand words not referring to objects (verbs, adjectives, performatives) in a similar prior study is not attributable to the use of dynamic video depictions. Thus, 6-9 month olds’ experience of spoken language includes some understanding of common words for concrete objects, but relatively impoverished comprehension of other words. PMID:26664329

  12. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English

    ERIC Educational Resources Information Center

    Banzina, Elina; Dilley, Laura C.; Hewitt, Lynne E.

    2016-01-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated the production of SS and UU syllables by Russian learners of English. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found,…

  13. The Effect of Background Noise on the Word Activation Process in Nonnative Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Scharenborg, Odette; Coumans, Juul M. J.; van Hout, Roeland

    2018-01-01

    This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on…

  14. Spoken Narrative Assessment: A Supplementary Measure of Children's Creativity

    ERIC Educational Resources Information Center

    Wong, Miranda Kit-Yi; So, Wing Chee

    2016-01-01

    This study developed a spoken narrative (i.e., storytelling) assessment as a supplementary measure of children's creativity. Both spoken and gestural contents of children's spoken narratives were coded to assess their verbal and nonverbal creativity. The psychometric properties of the coding system for the spoken narrative assessment were…

  15. Grammatical Deviations in the Spoken and Written Language of Hebrew-Speaking Children With Hearing Impairments.

    PubMed

    Tur-Kaspa, Hana; Dromi, Esther

    2001-04-01

    The present study reports a detailed analysis of written and spoken language samples of Hebrew-speaking children aged 11-13 years who are deaf. It focuses on the description of various grammatical deviations in the two modalities. Participants were 13 students with hearing impairments (HI) attending special classrooms integrated into two elementary schools in Tel Aviv, Israel, and 9 students with normal hearing (NH) in regular classes in these same schools. Spoken and written language samples were collected from all participants using the same five preplanned elicitation probes. Students with HI were found to display significantly more grammatical deviations than their NH peers in both their spoken and written language samples. Most importantly, between-modality differences were noted. The participants with HI exhibited significantly more grammatical deviations in their written language samples than in their spoken samples. However, the distribution of grammatical deviations across categories was similar in the two modalities. The most common grammatical deviations in order of their frequency were failure to supply obligatory morphological markers, failure to mark grammatical agreement, and the omission of a major syntactic constituent in a sentence. Word order violations were rarely recorded in the Hebrew samples. Performance differences in the two modalities encourage clinicians and teachers to facilitate target linguistic forms in diverse communication contexts. Furthermore, the identification of linguistic targets for intervention must be based on the unique grammatical structure of the target language.

  16. Word meaning and the control of eye fixation: semantic competitor effects and the visual world paradigm.

    PubMed

    Huettig, Falk; Altmann, Gerry T M

    2005-05-01

    When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84-107]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word 'piano' when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c) both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word 'piano' unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.

  17. A Positivity Bias in Written and Spoken English and Its Moderation by Personality and Gender.

    PubMed

    Augustine, Adam A; Mehl, Matthias R; Larsen, Randy J

    2011-09-01

    The human tendency to use positive words ("adorable") more often than negative words ("dreadful") is called the linguistic positivity bias. We find evidence for this bias in two studies of word use, one based on written corpora and another based on naturalistic speech samples. In addition, we demonstrate that the positivity bias applies to nouns and verbs as well as adjectives. We also show that it is found to the same degree in written as well as spoken English. Moreover, personality traits and gender moderate the effect, such that persons high on extraversion and agreeableness and women display a larger positivity bias in naturalistic speech. Results are discussed in terms of how the linguistic positivity bias may serve as a mechanism for social facilitation. People, in general, and some people more than others, tend to talk about the brighter side of life.

  19. Semantic fluency in deaf children who use spoken and signed language in comparison with hearing peers

    PubMed Central

    Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2017-01-01

    Background Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language delays. Aims We compared deaf and hearing children's performance on a semantic fluency task. Optimal performance on this task requires a systematic search of the mental lexicon, the retrieval of words within a subcategory and, when that subcategory is exhausted, switching to a new subcategory. We compared retrieval patterns between groups, and also compared the responses of deaf children who used British Sign Language (BSL) with those who used spoken English. We investigated how semantic fluency performance related to children's expressive vocabulary and executive function skills, and also retested semantic fluency in the majority of the children nearly 2 years later, in order to investigate how much progress they had made in that time. Methods & Procedures Participants were deaf children aged 6–11 years (N = 106, comprising 69 users of spoken English, 29 users of BSL and eight users of Sign Supported English—SSE) compared with hearing children (N = 120) of the same age who used spoken English. Semantic fluency was tested for the category ‘animals’. We coded for errors, clusters (e.g., ‘pets’, ‘farm animals’) and switches. Participants also completed the Expressive One‐Word Picture Vocabulary Test and a battery of six non‐verbal executive function tasks. In addition, we collected follow‐up semantic fluency data for 70 deaf and 74 hearing children, nearly 2 years after they were first tested. Outcomes & Results Deaf children, whether using spoken or signed language, produced fewer items in the semantic fluency task than hearing children, but they showed similar patterns of responses for items
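    The cluster-and-switch coding this entry describes can be made concrete with a toy scorer. The SUBCATEGORY mapping and the scoring rules below are hypothetical simplifications for illustration, not the authors' coding manual: a cluster is a run of consecutive responses from the same subcategory, and a switch is a transition between subcategories.

```python
# Hypothetical subcategory mapping; real fluency studies use far richer
# (and sometimes overlapping) category schemes.
SUBCATEGORY = {
    "dog": "pets", "cat": "pets", "hamster": "pets",
    "cow": "farm", "sheep": "farm", "pig": "farm",
    "lion": "wild", "tiger": "wild",
}

def score_fluency(responses):
    """Count items, clusters, and switches in a semantic-fluency list."""
    cats = [SUBCATEGORY.get(w, "unknown") for w in responses]
    # A switch is any transition between two different subcategories.
    switches = sum(1 for a, b in zip(cats, cats[1:]) if a != b)
    clusters = switches + 1 if cats else 0
    return {"items": len(responses), "clusters": clusters, "switches": switches}
```

    For example, the list dog, cat, cow, sheep, lion yields three clusters (pets, farm, wild) and two switches, so a child who produces many items but few switches is exhausting subcategories rather than searching broadly.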

  20. Experimentally-induced Increases in Early Gesture Lead to Increases in Spoken Vocabulary

    PubMed Central

    LeBarton, Eve Sauer; Goldin-Meadow, Susan; Raudenbush, Stephen

    2014-01-01

    Differences in vocabulary that children bring with them to school can be traced back to the gestures they produce at 1;2, which, in turn, can be traced back to the gestures their parents produce at the same age (Rowe & Goldin-Meadow, 2009b). We ask here whether child gesture can be experimentally increased and, if so, whether the increases lead to increases in spoken vocabulary. Fifteen children aged 1;5 participated in an 8-week at-home intervention study (6 weekly training sessions plus follow-up 2 weeks later) in which all were exposed to object words, but only some were told to point at the named objects. Before each training session and at follow-up, children interacted naturally with caregivers to establish a baseline against which changes in communication were measured. Children who were told to gesture increased the number of gesture meanings they conveyed, not only during training but also during interactions with caregivers. These experimentally-induced increases in gesture led to larger spoken repertoires at follow-up. PMID:26120283

  2. A randomized trial comparison of the effects of verbal and pictorial naturalistic communication strategies on spoken language for young children with autism.

    PubMed

    Schreibman, Laura; Stahmer, Aubyn C

    2014-05-01

    Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT), to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS), on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.

  3. Dyslexia in Adults: Evidence for Deficits in Non-Word Reading and in the Phonological Representation of Lexical Items.

    ERIC Educational Resources Information Center

    Elbro, Carsten; And Others

    1994-01-01

    Compared to controls, adults (n=102) who reported a history of difficulties in learning to read were disabled in phonological coding, but less disabled in reading comprehension. Adults with poor phonological coding skills had basic deficits in phonological representations of spoken words, even when semantic word knowledge, phonemic awareness,…

  4. Auditory Selective Attention to Speech Modulates Activity in the Visual Word Form Area

    PubMed Central

    Yoncheva, Yuliya N.; Zevin, Jason D.; Maurer, Urs

    2010-01-01

    Selective attention to speech versus nonspeech signals in complex auditory input could produce top-down modulation of cortical regions previously linked to perception of spoken, and even visual, words. To isolate such top-down attentional effects, we contrasted 2 equally challenging active listening tasks, performed on the same complex auditory stimuli (words overlaid with a series of 3 tones). Instructions required selectively attending to either the speech signals (in service of rhyme judgment) or the melodic signals (tone-triplet matching). Selective attention to speech, relative to attention to melody, was associated with blood oxygenation level–dependent (BOLD) increases during functional magnetic resonance imaging (fMRI) in left inferior frontal gyrus, temporal regions, and the visual word form area (VWFA). Further investigation of the activity in visual regions revealed overall deactivation relative to baseline rest for both attention conditions. Topographic analysis demonstrated that while attending to melody drove deactivation equivalently across all fusiform regions of interest examined, attending to speech produced a regionally specific modulation: deactivation of all fusiform regions, except the VWFA. Results indicate that selective attention to speech can topographically tune extrastriate cortex, leading to increased activity in VWFA relative to surrounding regions, in line with the well-established connectivity between areas related to spoken and visual word perception in skilled readers. PMID:19571269

  5. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    PubMed

    Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

    Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and with micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6 to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices most strongly correlated with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills.

  6. The process of spoken word recognition in the face of signal degradation.

    PubMed

    Farris-Trimble, Ashley; McMurray, Bob; Cigrand, Nicole; Tomblin, J Bruce

    2014-02-01

    Though much is known about how words are recognized, little research has focused on how a degraded signal affects the fine-grained temporal aspects of real-time word recognition. The perception of degraded speech was examined in two populations with the goal of describing the time course of word recognition and lexical competition. Thirty-three postlingually deafened cochlear implant (CI) users and 57 normal hearing (NH) adults (16 in a CI-simulation condition) participated in a visual world paradigm eye-tracking task in which their fixations to a set of phonologically related items were monitored as they heard one item being named. Each degraded-speech group was compared with a set of age-matched NH participants listening to unfiltered speech. CI users and the simulation group showed a delay in activation relative to the NH listeners, and there is weak evidence that the CI users showed differences in the degree of peak and late competitor activation. In general, though, the degraded-speech groups behaved statistically similarly with respect to activation levels.

  7. Word Learning Deficits in Children With Dyslexia

    PubMed Central

    Hogan, Tiffany; Green, Samuel; Gray, Shelley; Cabbage, Kathryn; Cowan, Nelson

    2017-01-01

    Purpose The purpose of this study is to investigate word learning in children with dyslexia to ascertain their strengths and weaknesses during the configuration stage of word learning. Method Children with typical development (N = 116) and dyslexia (N = 68) participated in computer-based word learning games that assessed word learning in 4 sets of games that manipulated phonological or visuospatial demands. All children were monolingual English-speaking 2nd graders without oral language impairment. The word learning games measured children's ability to link novel names with novel objects, to make decisions about the accuracy of those names and objects, to recognize the semantic features of the objects, and to produce the names of the novel words. Accuracy data were analyzed using analyses of covariance with nonverbal intelligence scores as a covariate. Results Word learning deficits were evident for children with dyslexia across every type of manipulation and on 3 of 5 tasks, but not for every combination of task/manipulation. Deficits were more common when task demands taxed phonology. Visuospatial manipulations led to both disadvantages and advantages for children with dyslexia. Conclusion Children with dyslexia evidence spoken word learning deficits, but their performance is highly dependent on manipulations and task demand, suggesting a processing trade-off between visuospatial and phonological demands. PMID:28388708

  8. The Measure of a Black Life?: A Poetic Interpretation of Hope and Discontent

    ERIC Educational Resources Information Center

    William-White, Lisa

    2013-01-01

    Here, Spoken Word poetics (William-White, 2011a, 2011b; William-White & White, 2011; William-White, 2013) is utilized to interpret and reflect on racialized violence and homicide in the United States. African American youth, particularly in urban communities, are disproportionately affected by violent crime, namely homicide when…

  9. Does segmental overlap help or hurt? Evidence from blocked cyclic naming in spoken and written production.

    PubMed

    Breining, Bonnie; Nozari, Nazbanou; Rapp, Brenda

    2016-04-01

    Past research has demonstrated interference effects when words are named in the context of multiple items that share a meaning. This interference has been explained within various incremental learning accounts of word production, which propose that each attempt at mapping semantic features to lexical items induces slight but persistent changes that result in cumulative interference. We examined whether similar interference-generating mechanisms operate during the mapping of lexical items to segments by examining the production of words in the context of others that share segments. Previous research has shown that initial-segment overlap amongst a set of target words produces facilitation, not interference. However, this initial-segment facilitation is likely due to strategic preparation, an external factor that may mask underlying interference. In the present study, we applied a novel manipulation in which the segmental overlap across target items was distributed unpredictably across word positions, in order to reduce strategic response preparation. This manipulation led to interference in both spoken (Exp. 1) and written (Exp. 2) production. We suggest that these findings are consistent with a competitive learning mechanism that applies across stages and modalities of word production.

  10. How Do Raters Judge Spoken Vocabulary?

    ERIC Educational Resources Information Center

    Li, Hui

    2016-01-01

    The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…

  11. MAWRID: A Model of Arabic Word Reading in Development.

    PubMed

    Saiegh-Haddad, Elinor

    2017-07-01

    This article offers a model of Arabic word reading according to which three conspicuous features of the Arabic language and orthography shape the development of word reading in this language: (a) vowelization/vocalization, or the use of diacritical marks to represent short vowels and other features of articulation; (b) morphological structure, namely, the predominance and transparency of derivational morphological structure in the linguistic and orthographic representation of the Arabic word; and (c) diglossia, specifically, the lexical and lexico-phonological distance between the spoken and the standard forms of Arabic words. It is argued that the triangulation of these features governs the acquisition and deployment of reading mechanisms across development. Moreover, the difficulties that readers encounter in their journey from beginning to skilled reading may be better understood if evaluated within these language-specific features of Arabic language and orthography.

  12. Lexical and Metrical Stress in Word Recognition: Lexical or Pre-Lexical Influences?

    ERIC Educational Resources Information Center

    Slowiaczek, Louisa M.; Soltano, Emily G.; Bernstein, Hilary L.

    2006-01-01

    The influence of lexical stress and/or metrical stress on spoken word recognition was examined. Two experiments were designed to determine whether response times in lexical decision or shadowing tasks are influenced when primes and targets share lexical stress patterns (JUVenile-BIBlical [Syllables printed in capital letters indicate those…

  13. The serial order of response units in word production: The case of typing.

    PubMed

    Scaltritti, Michele; Longcamp, Marieke; Alario, F-Xavier

    2018-05-01

    The selection and ordering of response units (phonemes, letters, keystrokes) represents a transversal issue across different modalities of language production. Here, the issue of serial order was investigated with respect to typewriting. Following seminal investigations in the spoken modality, we conducted an experiment where participants typed as many times as possible a pair of words during a fixed time-window. The 2 words shared either their first 2 keystrokes, the last 2, all the keystrokes, or were unrelated. Fine-grained performance measures were recorded at the level of individual keystrokes. In contrast with previous results from the spoken modality, we observed an overall facilitation for words sharing the initial keystrokes. In addition, the initial overlap briefly delayed the execution of the following keystroke. The results are discussed with reference to different theoretical perspectives on serial order, with particular attention to the competing accounts offered by position coding models and chaining models. Our findings point to potential major differences between the speaking and typing modalities in terms of interactive activation between lexical and response units processing levels.

  14. Decoding spoken words using local field potentials recorded from the cortical surface

    NASA Astrophysics Data System (ADS)

    Kellis, Spencer; Miller, Kai; Thomson, Kyle; Brown, Richard; House, Paul; Greger, Bradley

    2010-10-01

    Pathological conditions such as amyotrophic lateral sclerosis or damage to the brainstem can leave patients severely paralyzed but fully aware, in a condition known as 'locked-in syndrome'. Communication in this state is often reduced to selecting individual letters or words by arduous residual movements. More intuitive and rapid communication may be restored by directly interfacing with language areas of the cerebral cortex. We used a grid of closely spaced, nonpenetrating micro-electrodes to record local field potentials (LFPs) from the surface of face motor cortex and Wernicke's area. From these LFPs we were successful in classifying a small set of words on a trial-by-trial basis at levels well above chance. We found that the pattern of electrodes with the highest accuracy changed for each word, which supports the idea that closely spaced micro-electrodes are capable of capturing neural signals from independent neural processing assemblies. These results further support using cortical surface potentials (electrocorticography) in brain-computer interfaces. These results also show that LFPs recorded from the cortical surface (micro-electrocorticography) of language areas can be used to classify speech-related cortical rhythms and potentially restore communication to locked-in patients.

  15. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    PubMed

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  16. Attention to Maternal Multimodal Naming by 6- to 8-Month-Old Infants and Learning of Word-Object Relations

    ERIC Educational Resources Information Center

    Gogate, Lakshmi J.; Bolzani, Laura H.; Betancourt, Eugene A.

    2006-01-01

    We examined whether mothers' use of temporal synchrony between spoken words and moving objects, and infants' attention to object naming, predict infants' learning of word-object relations. Following 5 min of free play, 24 mothers taught their 6- to 8-month-olds the names of 2 toy objects, "Gow" and "Chi," during a 3-min play…

  17. Words Spoken by Teachers to Primary-Level Classes of Deaf Children.

    ERIC Educational Resources Information Center

    Stuckless, E. Ross; Miller, Linda D.

    1987-01-01

    The study generated a list of the 1,000 words most frequently used by teachers of hearing-impaired children in six primary-grade classes. Results have implications for real-time captioning systems of communication. An alphabetical list and a list ordered by frequency of use are appended. (DB)

  18. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language.

    PubMed

    Williams, Joshua T; Newman, Sharlene D

    2017-02-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively few studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and the activation of the targets in sparse and dense neighborhoods was compared. Neighborhood density effects in the auditory primed lexical decision task were then compared to previous reports of native deaf signers who were only processing sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, increased inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.

  19. Vocabulary Learning in a Yorkshire Terrier: Slow Mapping of Spoken Words

    PubMed Central

    Griebel, Ulrike; Oller, D. Kimbrough

    2012-01-01

    Rapid vocabulary learning in children has been attributed to “fast mapping”, with new words often claimed to be learned through a single presentation. As reported in 2004 in Science, a border collie (Rico) not only learned to identify more than 200 words, but fast mapped the new words, remembering meanings after just one presentation. Our research tests the fast mapping interpretation of the Science paper based on Rico's results, while extending the demonstration of large vocabulary recognition to a lap dog. We tested a Yorkshire terrier (Bailey) with the same procedures as Rico, illustrating that Bailey accurately retrieved randomly selected toys from a set of 117 on voice command of the owner. Second, we tested her retrieval based on two additional voices, one male, one female, with different accents that had never been involved in her training, again showing she was capable of recognition by voice command. Third, we conducted both exclusion-based training of new items (toys she had never seen before with names she had never heard before) embedded in a set of known items, and subsequent retention tests designed as in the Rico experiment. After Bailey succeeded on exclusion and retention tests, a crucial evaluation of true mapping tested items previously successfully retrieved in exclusion and retention, but now pitted against each other in a two-choice task. Bailey failed on the true mapping task repeatedly, illustrating that the claim of fast mapping in Rico had not been proven, because no true mapping task had ever been conducted with him. It appears that the task called retention in the Rico study only demonstrated success in retrieval by a process of extended exclusion. PMID:22363421

  20. Second language experience modulates word retrieval effort in bilinguals: evidence from pupillometry

    PubMed Central

    Schmidtke, Jens

    2014-01-01

    Bilingual speakers often have less language experience compared to monolinguals as a result of speaking two languages and/or a later age of acquisition of the second language. This may result in weaker and less precise phonological representations of words in memory, which may cause greater retrieval effort during spoken word recognition. To gauge retrieval effort, the present study compared the effects of word frequency, neighborhood density (ND), and level of English experience by testing monolingual English speakers and native Spanish speakers who differed in their age of acquisition of English (early/late). In the experimental paradigm, participants heard English words and matched them to one of four pictures while the pupil size, an indication of cognitive effort, was recorded. Overall, both frequency and ND effects could be observed in the pupil response, indicating that lower frequency and higher ND were associated with greater retrieval effort. Bilingual speakers showed an overall delayed pupil response and a larger ND effect compared to the monolingual speakers. The frequency effect was the same in early bilinguals and monolinguals but was larger in late bilinguals. Within the group of bilingual speakers, higher English proficiency was associated with an earlier pupil response in addition to a smaller frequency and ND effect. These results suggest that greater retrieval effort associated with bilingualism may be a consequence of reduced language experience rather than constitute a categorical bilingual disadvantage. Future avenues for the use of pupillometry in the field of spoken word recognition are discussed. PMID:24600428

  1. Early Action and Gesture "Vocabulary" and Its Relation with Word Comprehension and Production

    ERIC Educational Resources Information Center

    Caselli, Maria Cristina; Rinaldi, Pasquale; Stefanini, Silvia; Volterra, Virginia

    2012-01-01

    Data from 492 Italian infants (8-18 months) were collected with the parental questionnaire MacArthur Bates Communicative Development Inventories to describe early actions and gestures (A-G) "vocabulary" and its relation with spoken vocabulary in both comprehension and production. A-G were more strongly correlated with word comprehension…

  2. Speaker variability augments phonological processing in early word learning

    PubMed Central

    Rost, Gwyneth C.; McMurray, Bob

    2010-01-01

    Infants in the early stages of word learning have difficulty learning lexical neighbors (i.e., word pairs that differ by a single phoneme), despite the ability to discriminate the same contrast in a purely auditory task. While prior work has focused on top-down explanations for this failure (e.g. task demands, lexical competition), none has examined if bottom-up acoustic-phonetic factors play a role. We hypothesized that lexical neighbor learning could be improved by incorporating greater acoustic variability in the words being learned, as this may buttress still developing phonetic categories, and help infants identify the relevant contrastive dimension. Infants were exposed to pictures accompanied by labels spoken by either a single or multiple speakers. At test, infants in the single-speaker condition failed to recognize the difference between the two words, while infants who heard multiple speakers discriminated between them. PMID:19143806

  3. [A Spoken Word Count of Six-Year-Old Navajo Children, with Supplement--Complete Word List.] Navajo Reading Study Progress Report No. 10.

    ERIC Educational Resources Information Center

    Spolsky, Bernard; And Others

    As part of a study of the feasibility and effect of teaching Navajo children to read their own language first, a word count collected by 22 Navajo adults interviewing over 200 Navajo 6-year-olds was undertaken. This report discusses the word count and the interview texts in terms of (1) number of sentences, (2) number of words, (3) number of…

  4. Poetic Expressions: Students of Color Express Resiliency through Metaphors and Similes

    ERIC Educational Resources Information Center

    Hall, Horace R.

    2007-01-01

    The after-school City School Outreach youth program captured the attention of high school male students by offering them a physically and psychologically safe environment to talk about issues they faced. The students of color who attended the program used various forms of creative written expression (i.e., poetry, spoken word, and hip hop) to…

  5. Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words.

    PubMed

    Takashima, Atsuko; Bakker, Iske; van Hell, Janet G; Janzen, Gabriele; McQueen, James M

    2017-04-01

    When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems.

  6. Lexical integration of novel words without sleep.

    PubMed

    Lindsay, Shane; Gaskell, M Gareth

    2013-03-01

    Learning a new word involves integration with existing lexical knowledge. Previous work has shown that sleep-associated memory consolidation processes are important for the engagement of novel items in lexical competition. In 3 experiments we used spaced exposure regimes to investigate memory for novel words and whether lexical integration can occur within a single day. The degree to which a new spoken word (e.g., cathedruke) engaged in lexical competition with established phonological neighbors (e.g., cathedral) was employed as a marker for lexical integration. We found evidence for improvements in recognition and cued recall following a time period including sleep, but we also found lexical competition effects emerging within a single day. Spaced exposure to novel words on its own did not bring about this within-day lexical competition effect (Experiment 2), which instead occurred with either spaced or massed exposure to novel words, provided that there was also spaced exposure to the phonological neighbors (Experiments 1 and 3). Although previous studies have indicated that sleep-dependent memory consolidation may be sufficient for lexical integration, our results show it is not a necessary precondition.

  7. Non-Arbitrariness in Mapping Word Form to Meaning: Cross-Linguistic Formal Markers of Word Concreteness.

    PubMed

    Reilly, Jamie; Hung, Jinyi; Westbury, Chris

    2017-05-01

    Arbitrary symbolism is a linguistic doctrine that predicts an orthogonal relationship between word forms and their corresponding meanings. Recent corpora analyses have demonstrated violations of arbitrary symbolism with respect to concreteness, a variable characterizing the sensorimotor salience of a word. In addition to qualitative semantic differences, abstract and concrete words are also marked by distinct morphophonological structures such as length and morphological complexity. Native English speakers show sensitivity to these markers in tasks such as auditory word recognition and naming. One unanswered question is whether this violation of arbitrariness reflects an idiosyncratic property of the English lexicon or whether word concreteness is a marked phenomenon across other natural languages. We isolated concrete and abstract English nouns (N = 400), and translated each into Russian, Arabic, Dutch, Mandarin, Hindi, Korean, Hebrew, and American Sign Language. We conducted offline acoustic analyses of abstract and concrete word length discrepancies across languages. In a separate experiment, native English speakers (N = 56) with no prior knowledge of these foreign languages judged concreteness of these nouns (e.g., Can you see, hear, feel, or touch this? Yes/No). Each naïve participant heard pre-recorded words presented in randomized blocks of three foreign languages following a brief listening exposure to a narrative sample from each respective language. Concrete and abstract words differed by length across five of eight languages, and prediction accuracy exceeded chance for four of eight languages. These results suggest that word concreteness is a marked phenomenon across several of the world's most widely spoken languages. We interpret these findings as supportive of an adaptive cognitive heuristic that allows listeners to exploit non-arbitrary mappings of word form to word meaning.

  8. Fast Mapping of Novel Word Forms Traced Neurophysiologically

    PubMed Central

    Shtyrov, Yury

    2011-01-01

    Human capacity to quickly learn new words, critical for our ability to communicate using language, is well-known from behavioral studies and observations, but its neural underpinnings remain unclear. In this study, we have used event-related potentials to record brain activity to novel spoken word forms as they are being learnt by the human nervous system through passive auditory exposure. We found that the brain response dynamics change dramatically within the short (20 min) exposure session: as the subjects become familiarized with the novel word forms, the early (∼100 ms) fronto-central activity they elicit increases in magnitude and becomes similar to that of known real words. At the same time, acoustically similar real words used as control stimuli show a relatively stable response throughout the recording session; these differences between the stimulus groups are confirmed using both factorial and linear regression analyses. Furthermore, acoustically matched novel non-speech stimuli do not demonstrate similar response increase, suggesting neural specificity of this rapid learning phenomenon to linguistic stimuli. Left-lateralized perisylvian cortical networks appear to be underlying such fast mapping of novel word forms onto the brain’s mental lexicon. PMID:22125543

  9. Subarashii: Encounters in Japanese Spoken Language Education.

    ERIC Educational Resources Information Center

    Bernstein, Jared; Najmi, Amir; Ehsani, Farzad

    1999-01-01

    Describes Subarashii, an experimental computer-based interactive spoken-language education system designed to understand what a student is saying in Japanese and respond in a meaningful way in spoken Japanese. Implementation of a preprototype version of the Subarashii system identified strengths and limitations of continuous speech recognition…

  10. Second Language Learners' Contiguous and Discontiguous Multi-Word Unit Use over Time

    ERIC Educational Resources Information Center

    Yuldashev, Aziz; Fernandez, Julieta; Thorne, Steven L.

    2013-01-01

    Research has described the key role of formulaic language use in both written and spoken communication (Schmitt, 2004; Wray, 2002), as well as in relation to L2 learning (Ellis, Simpson-Vlach, & Maynard, 2008). Relatively few studies have examined related fixed and semi-fixed multi-word units (MWUs), which comprise fixed parts with the potential…

  11. Building Spoken Language in the First Plane

    ERIC Educational Resources Information Center

    Bettmann, Joen

    2016-01-01

    Through a strong Montessori orientation to the parameters of spoken language, Joen Bettmann makes the case for "materializing" spoken knowledge using the stimulation of real objects and real situations that promote mature discussion around the sensorial aspect of the prepared environment. She lists specific materials in the classroom…

  12. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    PubMed Central

    Li, Yu; Zhang, Linjun; Xia, Zhichao; Yang, Jie; Shu, Hua; Li, Ping

    2017-01-01

    Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability only in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading. PMID:28690507

  14. N400 brain responses to spoken phrases paired with photographs of scenes: implications for visual scene displays in AAC systems.

    PubMed

    Wilkinson, Krista M; Stutzman, Allyson; Seisler, Andrea

    2015-03-01

    Augmentative and alternative communication (AAC) systems are often implemented for individuals whose speech cannot meet their full communication needs. One type of aided display is called a Visual Scene Display (VSD). VSDs consist of integrated scenes (such as photographs) in which language concepts are embedded. Often, the representations of concepts on VSDs are perceptually similar to their referents. Given this physical resemblance, one may ask how well VSDs support development of symbolic functioning. We used brain imaging techniques to examine whether matches and mismatches between the content of spoken messages and photographic images of scenes evoke neural activity similar to activity that occurs to spoken or written words. Electroencephalography (EEG) was recorded from 15 college students who were shown photographs paired with spoken phrases that were either matched or mismatched to the concepts embedded within each photograph. Of interest was the N400 component, a negative deflecting wave 400 ms post-stimulus that is considered to be an index of semantic functioning. An N400 response in the mismatched condition (but not the matched) would replicate brain responses to traditional linguistic symbols. An N400 was found, exclusively in the mismatched condition, suggesting that mismatches between spoken messages and VSD-type representations set the stage for the N400 in ways similar to traditional linguistic symbols.

  15. The effects of speech production and vocabulary training on different components of spoken language performance.

    PubMed

    Paatsch, Louise E; Blamey, Peter J; Sarant, Julia Z; Bow, Catherine P

    2006-01-01

    A group of 21 hard-of-hearing and deaf children attending primary school were trained by their teachers on the production of selected consonants and on the meanings of selected words. Speech production, vocabulary knowledge, reading aloud, and speech perception measures were obtained before and after each type of training. The speech production training produced a small but significant improvement in the percentage of consonants correctly produced in words. The vocabulary training improved knowledge of word meanings substantially. Performance on speech perception and reading aloud were significantly improved by both types of training. These results were in accord with the predictions of a mathematical model put forward to describe the relationships between speech perception, speech production, and language measures in children (Paatsch, Blamey, Sarant, Martin, & Bow, 2004). These training data demonstrate that the relationships between the measures are causal. In other words, improvements in speech production and vocabulary performance produced by training will carry over into predictable improvements in speech perception and reading scores. Furthermore, the model will help educators identify the most effective methods of improving receptive and expressive spoken language for individual children who are deaf or hard of hearing.

  16. Combat Sports Bloggers, Mad Scientist Poets, and Comic Scriptwriters: Engaging Boys in Writing on Their Own Terms

    ERIC Educational Resources Information Center

    Loeper, Rachel

    2014-01-01

    As the program director of a community writing center that serves children and youth ages 5-18, Rachel Loeper sees it all, from 15-year-old spoken word poets to six-year-olds whose first "books" are strung together with yarn. In all of her roles--administrator, teacher, volunteer trainer--she values engaging the most reluctant of young…

  17. High Frequency rTMS over the Left Parietal Lobule Increases Non-Word Reading Accuracy

    ERIC Educational Resources Information Center

    Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano

    2012-01-01

    Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe…

  18. Lexical frequency and voice assimilation in complex words in Dutch

    NASA Astrophysics Data System (ADS)

    Ernestus, Mirjam; Lahey, Mybeth; Verhees, Femke; Baayen, Harald

    2004-05-01

    Words with higher token frequencies tend to have more reduced acoustic realizations than lower frequency words (e.g., Hay, 2000; Bybee, 2001; Jurafsky et al., 2001). This study documents frequency effects for regressive voice assimilation (obstruents are voiced before voiced plosives) in Dutch morphologically complex words in the subcorpus of read-aloud novels in the corpus of spoken Dutch (Oostdijk et al., 2002). As expected, the initial obstruent of the cluster tends to be absent more often as lexical frequency increases. More importantly, as frequency increases, the duration of vocal-fold vibration in the cluster decreases, and the duration of the bursts in the cluster increases, after partialing out cluster duration. This suggests that there is less voicing for higher-frequency words. In fact, phonetic transcriptions show regressive voice assimilation for only half of the words and progressive voice assimilation for one third. Interestingly, the progressive voice assimilation observed for higher-frequency complex words renders these complex words more similar to monomorphemic words: Dutch monomorphemic words typically contain voiceless obstruent clusters (Zonneveld, 1983). Such high-frequency complex words may therefore be less easily parsed into their constituent morphemes (cf. Hay, 2000), favoring whole word lexical access (Bertram et al., 2000).

  19. The Impact of Diglossia on Voweled and Unvoweled Word Reading in Arabic: A Developmental Study from Childhood to Adolescence

    ERIC Educational Resources Information Center

    Saiegh-Haddad, Elinor; Schiff, Rachel

    2016-01-01

    All native speakers of Arabic read in a language variety that is remarkably distant from the one they use in everyday speech. The study tested the impact of this distance on reading accuracy and fluency by comparing reading of Standard Arabic (StA) words, used in StA only, versus Spoken Arabic (SpA) words, used in SpA too, among Arabic native…

  20. Six-Month-Olds Comprehend Words that Refer to Parts of the Body

    ERIC Educational Resources Information Center

    Tincoff, Ruth; Jusczyk, Peter W.

    2012-01-01

    Comprehending spoken words requires a lexicon of sound patterns and knowledge of their referents in the world. Tincoff and Jusczyk (1999) demonstrated that 6-month-olds link the sound patterns "Mommy" and "Daddy" to video images of their parents, but not to other adults. This finding suggests that comprehension emerges at this young age and might…

  1. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers

    PubMed Central

    Kanjlia, Shipra; Merabet, Lotfi B.

    2017-01-01

    Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the “VWFA” is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind (n = 10, 9 female, 1 male) and sighted control (n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We

  2. A Platform for Multilingual Research in Spoken Dialogue Systems

    DTIC Science & Technology

    2000-08-01

    UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP010384 TITLE: A Platform for Multilingual Research in Spoken Dialogue...Ronald A. Cole*, Ben Serridge§, John-Paul Hosom, Andrew Cronk, and Ed Kaiser* *Center for Spoken Language Understanding; University of Colorado...Boulder; Boulder, CO, 80309, USA §Universidad de las Americas; 72820 Santa Catarina Martir; Puebla, Mexico *Center for Spoken Language Understanding (CSLU

  3. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    PubMed

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success

  4. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning

    PubMed Central

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.

    2016-01-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success
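    The regional ALFF measure described in the two records above can be stated compactly: it is essentially the mean spectral amplitude of a voxel's BOLD time series within a low-frequency band (commonly 0.01–0.08 Hz). The function below is an illustrative simplification, not the study's actual pipeline; the `alff` name, the band limits, and the plain-FFT amplitude estimate are assumptions for demonstration.

```python
import numpy as np

def alff(timeseries, tr, band=(0.01, 0.08)):
    """Amplitude of low-frequency fluctuation: mean FFT amplitude of a
    demeaned BOLD time series within the low-frequency band.

    timeseries : 1-D array of BOLD samples for one voxel or region
    tr         : repetition time in seconds (the sampling interval)
    """
    x = np.asarray(timeseries, dtype=float)
    x = x - x.mean()                           # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=tr)      # frequency axis in Hz
    amps = np.abs(np.fft.rfft(x)) / len(x)     # normalized amplitudes
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return amps[in_band].mean()
```

    A slow 0.05 Hz oscillation yields a much larger ALFF than a 0.2 Hz oscillation of the same amplitude, since only the former falls inside the band.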

  5. Design and performance of a large vocabulary discrete word recognition system. Volume 1: Technical report. [real time computer technique for voice data processing

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The development, construction, and test of a 100-word vocabulary near real time word recognition system are reported. Included are reasonable replacement of any one or all 100 words in the vocabulary, rapid learning of a new speaker, storage and retrieval of training sets, verbal or manual single word deletion, continuous adaptation with verbal or manual error correction, on-line verification of vocabulary as spoken, system modes selectable via verification display keyboard, relationship of classified word to neighboring word, and a versatile input/output interface to accommodate a variety of applications.

  6. Pupillary Responses to Words That Convey a Sense of Brightness or Darkness

    PubMed Central

    Mathôt, Sebastiaan; Grainger, Jonathan; Strijkers, Kristof

    2017-01-01

    Theories about embodiment of language hold that when you process a word’s meaning, you automatically simulate associated sensory input (e.g., perception of brightness when you process lamp) and prepare associated actions (e.g., finger movements when you process typing). To test this latter prediction, we measured pupillary responses to single words that conveyed a sense of brightness (e.g., day) or darkness (e.g., night) or were neutral (e.g., house). We found that pupils were largest for words conveying darkness, of intermediate size for neutral words, and smallest for words conveying brightness. This pattern was found for both visually presented and spoken words, which suggests that it was due to the words’ meanings, rather than to visual or auditory properties of the stimuli. Our findings suggest that word meaning is sufficient to trigger a pupillary response, even when this response is not imposed by the experimental task, and even when this response is beyond voluntary control. PMID:28613135

  7. Dyslexia in adults: Evidence for deficits in non-word reading and in the phonological representation of lexical items.

    PubMed

    Elbro, C; Nielsen, I; Petersen, D K

    1994-01-01

    Difficulties in reading and language skills which persist from childhood into adult life are the concerns of this article. The aims were twofold: (1) to find measures of adult reading processes that validate adults' retrospective reports of difficulties in learning to read during the school years, and (2) to search for indications of basic deficits in phonological processing that may point toward underlying causes of reading difficulties. Adults who reported a history of difficulties in learning to read (n=102) were distinctly disabled in phonological coding in reading, compared to adults without similar histories (n=56). They were less disabled in the comprehension of written passages, and the comprehension disability was explained by the phonological difficulties. A number of indications were found that adults with poor phonological coding skills in reading (i.e., dyslexia) have basic deficits in phonological representations of spoken words, even when semantic word knowledge, phonemic awareness, educational level, and daily reading habits are taken into account. It is suggested that dyslexics possess less distinct phonological representations of spoken words.

  8. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain.

    PubMed

    Higgins, Irina; Stringer, Simon; Schnupp, Jan

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-time dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable.

  9. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain

    PubMed Central

    Stringer, Simon

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-time dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable. PMID:28797034
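    The spike-time dependent plasticity at the heart of the model in the two records above has a standard pairwise exponential form: a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike, and weakened otherwise. The sketch below is the generic textbook rule, not the paper's exact network; the parameter names `a_plus`, `a_minus`, `tau` and their values are illustrative assumptions.

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise exponential STDP weight change.

    delta_t : t_post - t_pre in milliseconds. Positive (pre fires
    before post) gives potentiation; negative gives depression. The
    magnitude decays exponentially with the spike-timing gap.
    """
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)
```

    Precise spike timing is what drives learning under such a rule, which is why rate codes that discard timing carry less information about the polychronous groups the model develops.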

  10. The influence of contextual diversity on word learning.

    PubMed

    Johns, Brendan T; Dye, Melody; Jones, Michael N

    2016-08-01

    In a series of analyses over mega datasets, Jones, Johns, and Recchia (Canadian Journal of Experimental Psychology, 66(2), 115-124, 2012) and Johns et al. (Journal of the Acoustical Society of America, 132:2, EL74-EL80, 2012) found that a measure of contextual diversity that takes into account the semantic variability of a word's contexts provided a better fit to both visual and spoken word recognition data than traditional measures, such as word frequency or raw context counts. This measure was empirically validated with an artificial language experiment (Jones et al.). The present study extends the empirical results with a unique natural language learning paradigm, which allows for an examination of the semantic representations that are acquired as semantic diversity is varied. Subjects were incidentally exposed to novel words as they rated short selections from articles, books, and newspapers. When novel words were encountered across distinct discourse contexts, subjects were both faster and more accurate at recognizing them than when they were seen in redundant contexts. However, learning across redundant contexts promoted the development of more stable semantic representations. These findings are predicted by a distributional learning model trained on the same materials as our subjects.
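    The traditional measures that the study above builds on — raw word frequency and raw context counts — are easy to state precisely. A minimal sketch, assuming a corpus represented as tokenized documents (the function name and representation are illustrative, and the paper's own measure additionally weights contexts by their semantic variability):

```python
def frequency_and_diversity(docs, word):
    """Return (token frequency, contextual diversity) for `word`.

    docs : list of documents, each a list of tokens.
    Token frequency counts every occurrence; contextual diversity
    counts only the number of distinct documents the word appears in.
    """
    freq = sum(doc.count(word) for doc in docs)
    diversity = sum(1 for doc in docs if word in doc)
    return freq, diversity
```

    A word repeated many times within one document gets a high frequency but a diversity of 1; encountering it across distinct discourse contexts raises diversity, which is the condition that sped recognition in the experiment.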

  11. Compound nouns in spoken language production by speakers with aphasia compared to neurologically healthy speakers: an exploratory study.

    PubMed

    Eiesland, Eli Anne; Lind, Marianne

    2012-03-01

    Compounds are words that are made up of at least two other words (lexemes), featuring lexical and syntactic characteristics and thus particularly interesting for the study of language processing. Most studies of compounds and language processing have been based on data from experimental single word production and comprehension tasks. To enhance the ecological validity of morphological processing research, data from other contexts, such as discourse production, need to be considered. This study investigates the production of nominal compounds in semi-spontaneous spoken texts by a group of speakers with fluent types of aphasia compared to a group of neurologically healthy speakers. The speakers with aphasia produce significantly fewer nominal compound types in their texts than the non-aphasic speakers, and the compounds they produce exhibit fewer different types of semantic relations than the compounds produced by the non-aphasic speakers. The results are discussed in relation to theories of language processing.

  12. Gated Word Recognition by Postlingually Deafened Adults with Cochlear Implants: Influence of Semantic Context

    ERIC Educational Resources Information Center

    Patro, Chhayakanta; Mendel, Lisa Lucks

    2018-01-01

    Purpose: The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and investigate facilitative effects of semantic contexts on the IPs. Method: Listeners with CIs as well as those with normal hearing (NH)…

  13. Cognitive, Linguistic and Print-Related Predictors of Preschool Children's Word Spelling and Name Writing

    ERIC Educational Resources Information Center

    Milburn, Trelani F.; Hipfner-Boucher, Kathleen; Weitzman, Elaine; Greenberg, Janice; Pelletier, Janette; Girolametto, Luigi

    2017-01-01

    Preschool children begin to represent spoken language in print long before receiving formal instruction in spelling and writing. The current study sought to identify the component skills that contribute to preschool children's ability to begin to spell words and write their name. Ninety-five preschool children (mean age = 57 months) completed a…

  14. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    PubMed

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.

  15. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    ERIC Educational Resources Information Center

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  16. When does word frequency influence written production?

    PubMed

    Baus, Cristina; Strijkers, Kristof; Costa, Albert

    2013-01-01

    The aim of the present study was to explore the central (e.g., lexical processing) and peripheral processes (motor preparation and execution) underlying word production during typewriting. To do so, we tested non-professional typers in a picture typing task while continuously recording EEG. Participants were instructed to write (by means of a standard keyboard) the corresponding name for a given picture. The lexical frequency of the words was manipulated: half of the picture names were of high-frequency while the remaining were of low-frequency. Different measures were obtained: (1) first keystroke latency and (2) keystroke latency of the subsequent letters and duration of the word. Moreover, ERPs locked to the onset of the picture presentation were analyzed to explore the temporal course of word frequency in typewriting. The results showed an effect of word frequency for the first keystroke latency but not for the duration of the word or the speed with which letters were typed (interstroke intervals). The electrophysiological results showed the expected ERP frequency effect at posterior sites: amplitudes for low-frequency words were more positive than those for high-frequency words. However, relative to previous evidence in the spoken modality, the frequency effect appeared in a later time-window. These results demonstrate two marked differences in the processing dynamics underpinning typing compared to speaking: First, central processing dynamics between speaking and typing differ already in the manner that words are accessed; second, central processing differences in typing, unlike speaking, do not cascade to peripheral processes involved in response execution.

  17. When does word frequency influence written production?

    PubMed Central

    Baus, Cristina; Strijkers, Kristof; Costa, Albert

    2013-01-01

    The aim of the present study was to explore the central (e.g., lexical processing) and peripheral processes (motor preparation and execution) underlying word production during typewriting. To do so, we tested non-professional typers in a picture typing task while continuously recording EEG. Participants were instructed to write (by means of a standard keyboard) the corresponding name for a given picture. The lexical frequency of the words was manipulated: half of the picture names were of high-frequency while the remaining were of low-frequency. Different measures were obtained: (1) first keystroke latency and (2) keystroke latency of the subsequent letters and duration of the word. Moreover, ERPs locked to the onset of the picture presentation were analyzed to explore the temporal course of word frequency in typewriting. The results showed an effect of word frequency for the first keystroke latency but not for the duration of the word or the speed with which letters were typed (interstroke intervals). The electrophysiological results showed the expected ERP frequency effect at posterior sites: amplitudes for low-frequency words were more positive than those for high-frequency words. However, relative to previous evidence in the spoken modality, the frequency effect appeared in a later time-window. These results demonstrate two marked differences in the processing dynamics underpinning typing compared to speaking: First, central processing dynamics between speaking and typing differ already in the manner that words are accessed; second, central processing differences in typing, unlike speaking, do not cascade to peripheral processes involved in response execution. PMID:24399980

  18. When does length cause the word length effect?

    PubMed

    Jalbert, Annie; Neath, Ian; Bireta, Tamra J; Surprenant, Aimée M

    2011-03-01

    The word length effect, the finding that lists of short words are better recalled than lists of long words, has been termed one of the benchmark findings that any theory of immediate memory must account for. Indeed, the effect led directly to the development of working memory and the phonological loop, and it is viewed as the best remaining evidence for time-based decay. However, previous studies investigating this effect have confounded length with orthographic neighborhood size. In the present study, Experiments 1A and 1B revealed typical effects of length when short and long words were equated on all relevant dimensions previously identified in the literature except for neighborhood size. In Experiment 2, consonant-vowel-consonant (CVC) words with a large orthographic neighborhood were better recalled than were CVC words with a small orthographic neighborhood. In Experiments 3 and 4, using two different sets of stimuli, we showed that when short (1-syllable) and long (3-syllable) items were equated for neighborhood size, the word length effect disappeared. Experiment 5 replicated this with spoken recall. We suggest that the word length effect may be better explained by the differences in linguistic and lexical properties of short and long words rather than by length per se. These results add to the growing literature showing problems for theories of memory that include decay offset by rehearsal as a central feature.
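    Orthographic neighborhood size, the confound identified in the record above, is conventionally operationalized as Coltheart's N: the number of words obtainable from a target by substituting exactly one letter. A minimal sketch (the function name and the toy lexicon in the note below are illustrative):

```python
def coltheart_n(word, lexicon):
    """Count the orthographic neighbors of `word` in `lexicon`:
    same-length words differing by exactly one letter substitution."""
    return sum(
        1
        for w in lexicon
        if len(w) == len(word)
        and sum(a != b for a, b in zip(w, word)) == 1
    )
```

    For example, in the toy lexicon {cat, cot, cut, bat, dog, cart}, "cat" has three neighbors (cot, cut, bat); matching short and long items on this count was what made the length effect disappear.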

  19. Affective Congruence between Sound and Meaning of Words Facilitates Semantic Decision.

    PubMed

    Aryani, Arash; Jacobs, Arthur M

    2018-05-31

    A similarity between the form and meaning of a word (i.e., iconicity) may help language users to more readily access its meaning through direct form-meaning mapping. Previous work has supported this view by providing empirical evidence for this facilitatory effect in sign language, as well as for onomatopoetic words (e.g., cuckoo) and ideophones (e.g., zigzag). Thus, it remains largely unknown whether the beneficial role of iconicity in making semantic decisions can be considered a general feature in spoken language applying also to "ordinary" words in the lexicon. By capitalizing on the affective domain, and in particular arousal, we organized words in two distinctive groups of iconic vs. non-iconic based on the congruence vs. incongruence of their lexical (meaning) and sublexical (sound) arousal. In a two-alternative forced choice task, we asked participants to evaluate the arousal of printed words that were lexically either high or low arousing. In line with our hypothesis, iconic words were evaluated more quickly and more accurately than their non-iconic counterparts. These results indicate a processing advantage for iconic words, suggesting that language users are sensitive to sound-meaning mappings even when words are presented visually and read silently.

  20. Tracking Eye Movements to Localize Stroop Interference in Naming: Word Planning versus Articulatory Buffering

    ERIC Educational Resources Information Center

    Roelofs, Ardi

    2014-01-01

    Investigators have found no agreement on the functional locus of Stroop interference in vocal naming. Whereas it has long been assumed that the interference arises during spoken word planning, more recently some investigators have revived an account from the 1960s and 1970s holding that the interference occurs in an articulatory buffer after word…

  1. Acquiring Novel Words and Their Past Tenses: Evidence from Lexical Effects on Phonetic Categorisation

    ERIC Educational Resources Information Center

    Lindsay, Shane; Sedin, Leanne M.; Gaskell, M. Gareth

    2012-01-01

    Two experiments addressed how novel verbs come to be represented in the auditory input lexicon, and how the inflected forms of such novel words are acquired and recognised. Participants were introduced to new spoken forms as uninflected verbs. These varied in whether they contained a final /d/ (e.g., "confald" or "confal"). Either immediately…

  2. Syntax and reading comprehension: a meta-analysis of different spoken-syntax assessments.

    PubMed

    Brimo, Danielle; Lund, Emily; Sapp, Alysha

    2018-05-01

    Syntax is a language skill purported to support children's reading comprehension. However, researchers who have examined whether children with average and below-average reading comprehension score significantly different on spoken-syntax assessments report inconsistent results. To determine if differences in how syntax is measured affect whether children with average and below-average reading comprehension score significantly different on spoken-syntax assessments. Studies that included a group comparison design, children with average and below-average reading comprehension, and a spoken-syntax assessment were selected for review. Fourteen articles from a total of 1281 reviewed met the inclusionary criteria. The 14 articles were coded for the age of the children, score on the reading comprehension assessment, type of spoken-syntax assessment, type of syntax construct measured and score on the spoken-syntax assessment. A random-effects model was used to analyze the difference between the effect sizes of the types of spoken-syntax assessments and the difference between the effect sizes of the syntax construct measured. There was a significant difference between children with average and below-average reading comprehension on spoken-syntax assessments. Those with average and below-average reading comprehension scored significantly different on spoken-syntax assessments when norm-referenced and researcher-created assessments were compared. However, when the type of construct was compared, children with average and below-average reading comprehension scored significantly different on assessments that measured knowledge of spoken syntax, but not on assessments that measured awareness of spoken syntax. The results of this meta-analysis confirmed that the type of spoken-syntax assessment, whether norm-referenced or researcher-created, did not explain why some researchers reported that there were no significant differences between children with average and below

  3. Linguistic Context Versus Semantic Competition in Word Recognition by Younger and Older Adults With Cochlear Implants.

    PubMed

    Amichetti, Nicole M; Atagi, Eriko; Kong, Ying-Yee; Wingfield, Arthur

    The increasing numbers of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on effectiveness of word recognition. Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated as the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. Results showed that for both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context. Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context. This sensitivity, however, also resulted in a
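    Response entropy as described above — a function of how many different completions a sentence frame invites and how probability is spread across them — is naturally formalized as Shannon entropy over the cloze response distribution. The sketch below is one plausible formalization, not necessarily the authors' exact computation:

```python
import math

def response_entropy(response_counts):
    """Shannon entropy (in bits) of the distribution of completions
    produced for a sentence frame in cloze norms.

    response_counts : dict mapping each response word to its count
    """
    total = sum(response_counts.values())
    return -sum(
        (c / total) * math.log2(c / total)
        for c in response_counts.values()
        if c > 0
    )
```

    A frame that everyone completes the same way has entropy 0; four equally likely completions give 2 bits, the kind of high-entropy context that differentially hampered the older listeners.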

  4. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English.

    PubMed

    Banzina, Elina; Dilley, Laura C; Hewitt, Lynne E

    2016-08-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated Russian learners' of English production of SS and UU syllables. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found, likely due to a transfer of native phonological features. Next, a cross-modal phonological priming technique combined with a lexical decision task assessed the effect of inaccurate SS and UU syllable productions on native American English listeners' speech processing. Inaccurate UU vowels led to significant inhibition of lexical access, while reduced SS vowels revealed less interference. The results have implications for understanding the role of SS and UU syllables for word recognition and English pronunciation instruction.

  5. The Beat of Boyle Street: Empowering Aboriginal Youth through Music Making

    ERIC Educational Resources Information Center

    Wang, Elaine L.

    2010-01-01

    An irrepressibly popular musical phenomenon, hip-hop is close to spoken word and focuses on lyrics with a message, reviving local traditions of song that tell histories, counsel listeners, and challenge participants to outdo one another in clever exchanges. A hip-hop music-making program in Edmonton, Canada, successfully reengages at-risk…

  6. The Association Between Positive Relationships with Adults and Suicide-Attempt Resilience in American Indian Youth in New Mexico.

    PubMed

    FitzGerald, Courtney A; Fullerton, Lynne; Green, Dan; Hall, Meryn; Peñaloza, Linda J

    2017-01-01

    This study examined the 2013 New Mexico Youth Risk and Resiliency Survey (NM-YRRS) to determine whether cultural connectedness and positive relationships with adults protected against suicide attempts among American Indian and Alaska Native (AI/AN) youth and whether these relationships differed by gender. The sample included 2,794 AI/AN students in grades 9 to 12 who answered the question about past-year suicide attempts. Protective factor variables tested included relationships with adults at home, school, and the community. The language spoken at home was used as a proxy measure for cultural connectedness. Positive relationships with adults were negatively associated with the prevalence of past-year suicide attempts in bivariate analysis. However, language spoken at home was not associated with the prevalence of suicide attempts. Multivariate analysis showed that among girls, relationships with adults at home, at school, and in the community were independently associated with lower suicide-attempt prevalence. Among boys, only relationships with adults at home showed such an association. These results have important implications for the direction of future research about protective factors associated with AI/AN youth suicide risk as well as in the design of suicide intervention and prevention programs.

  7. Direction Asymmetries in Spoken and Signed Language Interpreting

    ERIC Educational Resources Information Center

    Nicodemus, Brenda; Emmorey, Karen

    2013-01-01

    Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…

  8. The statistical trade-off between word order and word structure – Large-scale evidence for the principle of least effort

    PubMed Central

    Koplenig, Alexander; Meyer, Peter; Wolfer, Sascha; Müller-Spitzer, Carolin

    2017-01-01

    Languages employ different strategies to transmit structural and grammatical information. While, for example, grammatical dependency relationships in sentences are mainly conveyed by the ordering of the words for languages like Mandarin Chinese, or Vietnamese, the word ordering is much less restricted for languages such as Inupiatun or Quechua, as these languages (also) use the internal structure of words (e.g. inflectional morphology) to mark grammatical relationships in a sentence. Based on a quantitative analysis of more than 1,500 unique translations of different books of the Bible in almost 1,200 different languages that are spoken as a native language by approximately 6 billion people (more than 80% of the world population), we present large-scale evidence for a statistical trade-off between the amount of information conveyed by the ordering of words and the amount of information conveyed by internal word structure: languages that rely more strongly on word order information tend to rely less on word structure information and vice versa. Or put differently, if less information is carried within the word, more information has to be spread among words in order to communicate successfully. In addition, we find that–despite differences in the way information is expressed–there is also evidence for a trade-off between different books of the biblical canon that recurs with little variation across languages: the more informative the word order of the book, the less informative its word structure and vice versa. We argue that this might suggest that, on the one hand, languages encode information in very different (but efficient) ways. On the other hand, content-related and stylistic features are statistically encoded in very similar ways. PMID:28282435

  9. Long-lasting attentional influence of negative and taboo words in an auditory variant of the emotional Stroop task.

    PubMed

    Bertels, Julie; Kolinsky, Régine; Pietrons, Elise; Morais, José

    2011-02-01

    Using an auditory adaptation of the emotional and taboo Stroop tasks, the authors compared the effects of negative and taboo spoken words in mixed and blocked designs. Both types of words elicited carryover effects with mixed presentations and interference with blocked presentations, suggesting similar long-lasting attentional effects. Both were also relatively resilient to the long-lasting influence of the preceding emotional word. Hence, contrary to what has been assumed (Schmidt & Saari, 2007), negative and taboo words do not seem to differ in terms of the temporal dynamics of the interdimensional shifting, at least in the auditory modality. PsycINFO Database Record (c) 2011 APA, all rights reserved.

  10. Multimodal therapy of word retrieval disorder due to phonological encoding dysfunction.

    PubMed

    Weill-Chounlamountry, Agnès; Capelle, Nathalie; Tessier, Catherine; Pradat-Diehl, Pascale

    2013-01-01

    To determine whether phonological multimodal therapy can improve naming and communication in a patient showing a lexical phonological naming disorder. This study employed oral and written learning tasks, using an error reduction procedure. A single-case design computer-assisted treatment was used with a 52 year-old woman with fluent aphasia consecutive to a cerebral infarction. The cognitive analysis of her word retrieval disorder exhibited a phonological encoding dysfunction. Thus, a phonological procedure was designed addressing the output phonological lexicon using computer analysis of spoken and written words. The effects were tested for trained words, generalization to untrained words, maintenance and specificity. Transfer of improvement to daily life was also assessed. After therapy, the verbal naming of both trained and untrained words was improved at p < 0.001. The improvement was still maintained after 3 months without therapy. This treatment was specific since the word dictation task did not change. Communication in daily life was improved at p < 0.05. This study of a patient with word retrieval disorder due to phonological encoding dysfunction demonstrated the effectiveness of a phonological and multimodal therapeutic treatment.

  11. Children of Few Words: Relations Among Selective Mutism, Behavioral Inhibition, and (Social) Anxiety Symptoms in 3- to 6-Year-Olds.

    PubMed

    Muris, Peter; Hendriks, Eline; Bot, Suili

    2016-02-01

    Children with selective mutism (SM) fail to speak in specific public situations (e.g., school), despite speaking normally in other situations (e.g., at home). The current study explored the phenomenon of SM in a sample of 57 non-clinical children aged 3-6 years. Children performed two speech tasks to assess their absolute amount of spoken words, while their parents completed questionnaires for measuring children's levels of SM, social anxiety and non-social anxiety symptoms as well as the temperament characteristic of behavioral inhibition. The results indicated that high levels of parent-reported SM were primarily associated with high levels of social anxiety symptoms. The number of spoken words was negatively related to behavioral inhibition: children with a more inhibited temperament used fewer words during the speech tasks. Future research is necessary to test whether the temperament characteristic of behavioral inhibition prompts children to speak less in novel social situations, and whether it is mainly social anxiety that turns this taciturnity into the psychopathology of SM.

  12. Presentation video retrieval using automatically recovered slide and spoken text

    NASA Astrophysics Data System (ADS)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.

  13. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers.

    PubMed

    Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina

    2017-11-22

    Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind ( n = 10, 9 female, 1 male) and sighted control ( n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We

  14. Spoken Language Processing in the Clarissa Procedure Browser

    NASA Technical Reports Server (NTRS)

    Rayner, M.; Hockey, B. A.; Renders, J.-M.; Chatzichrisafis, N.; Farrell, K.

    2005-01-01

    Clarissa, an experimental voice enabled procedure browser that has recently been deployed on the International Space Station, is as far as we know the first spoken dialog system in space. We describe the objectives of the Clarissa project and the system's architecture. In particular, we focus on three key problems: grammar-based speech recognition using the Regulus toolkit; methods for open mic speech recognition; and robust side-effect free dialogue management for handling undos, corrections and confirmations. We first describe the grammar-based recogniser we have build using Regulus, and report experiments where we compare it against a class N-gram recogniser trained off the same 3297 utterance dataset. We obtained a 15% relative improvement in WER and a 37% improvement in semantic error rate. The grammar-based recogniser moreover outperforms the class N-gram version for utterances of all lengths from 1 to 9 words inclusive. The central problem in building an open-mic speech recognition system is being able to distinguish between commands directed at the system, and other material (cross-talk), which should be rejected. Most spoken dialogue systems make the accept/reject decision by applying a threshold to the recognition confidence score. NASA shows how a simple and general method, based on standard approaches to document classification using Support Vector Machines, can give substantially better performance, and report experiments showing a relative reduction in the task-level error rate by about 25% compared to the baseline confidence threshold method. Finally, we describe a general side-effect free dialogue management architecture that we have implemented in Clarissa, which extends the "update semantics'' framework by including task as well as dialogue information in the information state. We show that this enables elegant treatments of several dialogue management problems, including corrections, confirmations, querying of the environment, and regression

  15. Naturalistic and Experimental Analyses of Word Frequency and Neighborhood Density Effects in Slips of the Ear*

    PubMed Central

    Vitevitch, Michael S.

    2008-01-01

    A comparison of the lexical characteristics of 88 auditory misperceptions (i.e., slips of the ear) showed no difference in word-frequency, neighborhood density, and neighborhood frequency between the actual and the perceived utterances. Another comparison of slip of the ear tokens (i.e., actual and perceived utterances) and words in general (i.e., randomly selected from the lexicon) showed that slip of the ear tokens had denser neighborhoods and higher neighborhood frequency than words in general, as predicted from laboratory studies. Contrary to prediction, slip of the ear tokens were higher in frequency of occurrence than words in general. Additional laboratory-based investigations examined the possible source of the contradictory word frequency finding, highlighting the importance of using naturalistic and experimental data to develop models of spoken language processing. PMID:12866911

  16. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    PubMed Central

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H215O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. 
We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language

  17. Word-length effect in verbal short-term memory in individuals with Down's syndrome.

    PubMed

    Kanno, K; Ikeda, Y

    2002-11-01

    Many studies have indicated that individuals with Down's syndrome (DS) show a specific deficit in short-term memory for verbal information. The aim of the present study was to investigate the influence of the length of words on verbal short-term memory in individuals with DS. Twenty-eight children with DS and 10 control participants matched for memory span were tested on verbal serial recall and speech rate, which are thought to involve rehearsal and output speed. Although a significant word-length effect was observed in both groups for the recall of a larger number of items with a shorter spoken duration than for those with a longer spoken duration, the number of correct recalls in the group with DS was reduced compared to the control subjects. The results demonstrating poor short-term memory in children with DS were irrelevant to speech rate. In addition, the proportion of repetition-gained errors in serial recall was higher in children with DS than in control subjects. The present findings suggest that poor access to long-term lexical knowledge, rather than overt articulation speed, constrains verbal short-term memory functions in individuals with DS.

  18. Do you remember where sounds, pictures and words came from? The role of the stimulus format in object location memory.

    PubMed

    Delogu, Franco; Lilla, Christopher C

    2017-11-01

    Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separated blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli were presented in encoding. In the first block, participants were not aware of the spatial requirement while, in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.

  19. "Visual" Cortex Responds to Spoken Language in Blind Children.

    PubMed

    Bedny, Marina; Richardson, Hilary; Saxe, Rebecca

    2015-08-19

    Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. 
Copyright © 2015 the authors 0270-6474/15/3511674-08$15.00/0.

  20. Teaching the Spoken Language.

    ERIC Educational Resources Information Center

    Brown, Gillian

    1981-01-01

    Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…

  1. Diminutives facilitate word segmentation in natural speech: cross-linguistic evidence.

    PubMed

    Kempe, Vera; Brooks, Patricia J; Gillis, Steven; Samson, Graham

    2007-06-01

    Final-syllable invariance is characteristic of diminutives (e.g., doggie), which are a pervasive feature of the child-directed speech registers of many languages. Invariance in word endings has been shown to facilitate word segmentation (Kempe, Brooks, & Gillis, 2005) in an incidental-learning paradigm in which synthesized Dutch pseudonouns were used. To broaden the cross-linguistic evidence for this invariance effect and to increase its ecological validity, adult English speakers (n=276) were exposed to naturally spoken Dutch or Russian pseudonouns presented in sentence contexts. A forced choice test was given to assess target recognition, with foils comprising unfamiliar syllable combinations in Experiments 1 and 2 and syllable combinations straddling word boundaries in Experiment 3. A control group (n=210) received the recognition test with no prior exposure to targets. Recognition performance improved with increasing final-syllable rhyme invariance, with larger increases for the experimental group. This confirms that word ending invariance is a valid segmentation cue in artificial, as well as naturalistic, speech and that diminutives may aid segmentation in a number of languages.

  2. The interface between morphology and phonology: Exploring a morpho-phonological deficit in spoken production

    PubMed Central

    Cohen-Goldberg, Ariel M.; Cholin, Joana; Miozzo, Michele; Rapp, Brenda

    2013-01-01

    Morphological and phonological processes are tightly interrelated in spoken production. During processing, morphological processes must combine the phonological content of individual morphemes to produce a phonological representation that is suitable for driving phonological processing. Further, morpheme assembly frequently causes changes in a word's phonological well-formedness that must be addressed by the phonology. We report the case of an aphasic individual (WRG) who exhibits an impairment at the morpho-phonological interface. WRG was tested on his ability to produce phonologically complex sequences (specifically, coda clusters of varying sonority) in heteromorphemic and tautomorphemic environments. WRG made phonological errors that reduced coda sonority complexity in multimorphemic words (e.g., passed→[pæstɪd]) but not in monomorphemic words (e.g., past). WRG also made similar insertion errors to repair stress clash in multimorphemic environments, confirming his sensitivity to cross-morpheme well-formedness. We propose that this pattern of performance is the result of an intact phonological grammar acting over the phonological content of morphemic representations that were weakly joined because of brain damage. WRG may constitute the first case of a morpho-phonological impairment—these results suggest that the processes that combine morphemes constitute a crucial component of morpho-phonological processing. PMID:23466641

  3. Tracing Attention and the Activation Flow of Spoken Word Planning Using Eye Movements

    ERIC Educational Resources Information Center

    Roelofs, Ardi

    2008-01-01

    The flow of activation from concepts to phonological forms within the word production system was examined in 3 experiments. In Experiment 1, participants named pictures while ignoring superimposed distractor pictures that were semantically related, phonologically related, or unrelated. Eye movements and naming latencies were recorded. The…

  4. Spoken language development in children following cochlear implantation.

    PubMed

    Niparko, John K; Tobey, Emily A; Thal, Donna J; Eisenberg, Laurie S; Wang, Nae-Yuh; Quittner, Alexandra L; Fink, Nancy E

    2010-04-21

    Cochlear implantation is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe to profound sensorineural hearing loss (SNHL). To prospectively assess spoken language acquisition following cochlear implantation in young children. Prospective, longitudinal, and multidimensional assessment of spoken language development over a 3-year period in children who underwent cochlear implantation before 5 years of age (n = 188) from 6 US centers and hearing children of similar ages (n = 97) from 2 preschools recruited between November 2002 and December 2004. Follow-up completed between November 2005 and May 2008. Performance on measures of spoken language comprehension and expression (Reynell Developmental Language Scales). Children undergoing cochlear implantation showed greater improvement in spoken language performance (10.4; 95% confidence interval [CI], 9.6-11.2 points per year in comprehension; 8.4; 95% CI, 7.8-9.0 in expression) than would be predicted by their preimplantation baseline scores (5.4; 95% CI, 4.1-6.7, comprehension; 5.8; 95% CI, 4.6-7.0, expression), although mean scores were not restored to age-appropriate levels after 3 years. Younger age at cochlear implantation was associated with significantly steeper rate increases in comprehension (1.1; 95% CI, 0.5-1.7 points per year younger) and expression (1.0; 95% CI, 0.6-1.5 points per year younger). Similarly, each 1-year shorter history of hearing deficit was associated with steeper rate increases in comprehension (0.8; 95% CI, 0.2-1.2 points per year shorter) and expression (0.6; 95% CI, 0.2-1.0 points per year shorter). In multivariable analyses, greater residual hearing prior to cochlear implantation, higher ratings of parent-child interactions, and higher socioeconomic status were associated with greater rates of improvement in comprehension and expression. The use of cochlear implants in young children was

  5. Spoken Language Development in Children Following Cochlear Implantation

    PubMed Central

    Niparko, John K.; Tobey, Emily A.; Thal, Donna J.; Eisenberg, Laurie S.; Wang, Nae-Yuh; Quittner, Alexandra L.; Fink, Nancy E.

    2010-01-01

    Context Cochlear implantation (CI) is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe-to-profound sensorineural hearing loss (SNHL). Objective To prospectively assess spoken language acquisition following CI in young children with adjustment of co-variates. Design, Setting, and Participants Prospective, longitudinal, and multidimensional assessment of spoken language growth over a 3-year period following CI. Prospective cohort study of children who underwent CI before 5 years of age (n=188) from 6 US centers and hearing children of similar ages (n=97) from 2 preschools recruited between November, 2002 and December, 2004. Follow-up completed between November, 2005 and May, 2008. Main Outcome Measures Performance on measures of spoken language comprehension and expression. Results Children undergoing CI showed greater growth in spoken language performance (10.4;[95% confidence interval: 9.6–11.2] points/year in comprehension; 8.4;[7.8–9.0] in expression) than would be predicted by their pre-CI baseline scores (5.4;[4.1–6.7] comprehension; 5.8;[4.6–7.0] expression). Although mean scores were not restored to age-appropriate levels after 3 years, significantly greater annual rates of language acquisition were observed in children who were younger at CI (1.1;[0.5–1.7] points in comprehension per year younger; 1.0;[0.6–1.5] in expression), and in children with shorter histories of hearing deficit (0.8;[0.2,1.2] points in comprehension per year shorter; 0.6;[0.2–1.0] for expression). In multivariable analyses, greater residual hearing prior to CI, higher ratings of parent-child interactions, and higher SES associated with greater rates of growth in comprehension and expression. Conclusions The use of cochlear implants in young children was associated with better spoken language learning than would be predicted from their pre-implantation scores. However

  6. Large-scale Cortical Network Properties Predict Future Sound-to-Word Learning Success

    PubMed Central

    Sheppard, John Patrick; Wang, Ji-Ping; Wong, Patrick C. M.

    2013-01-01

    The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants’ future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults. PMID:22360625
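    Global and local efficiency, as used in this abstract, are standard graph-theoretic measures. The sketch below is a generic illustration only (not the authors' fMRI pipeline, and network construction from hemodynamic data is omitted): global efficiency is the mean inverse shortest-path length over all node pairs.

```python
from itertools import combinations
from collections import deque

def shortest_path_len(adj, src, dst):
    """BFS shortest-path length in an unweighted graph (inf if disconnected)."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return float("inf")

def global_efficiency(adj):
    """Mean inverse shortest-path length over all node pairs."""
    pairs = list(combinations(adj, 2))
    return sum(1.0 / shortest_path_len(adj, a, b) for a, b in pairs) / len(pairs)

# Ring of 4 nodes: opposite nodes are two hops apart, so efficiency is 5/6.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(global_efficiency(ring))
```

    Local efficiency (reported as reduced in successful learners) applies the same computation to each node's neighborhood subgraph.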

  7. Improved word comprehension in Global aphasia using a modified semantic feature analysis treatment.

    PubMed

    Munro, Philippa; Siyambalapitiya, Samantha

    2017-01-01

    Limited research has investigated treatment of single word comprehension in people with aphasia, despite numerous studies examining treatment of naming deficits. This study employed a single case experimental design to examine efficacy of a modified semantic feature analysis (SFA) therapy in improving word comprehension in an individual with Global aphasia, who presented with a semantically based comprehension impairment. Ten treatment sessions were conducted over a period of two weeks. Following therapy, the participant demonstrated improved comprehension of treatment items and generalisation to control items, measured by performance on a spoken word picture matching task. Improvements were also observed on other language assessments (e.g. subtests of WAB-R; PALPA subtest 47) and were largely maintained over a period of 12 weeks without further therapy. This study provides support for the efficacy of a modified SFA therapy in remediating single word comprehension in individuals with aphasia with a semantically based comprehension deficit.

  8. It's a Mad, Mad Wordle: For a New Take on Text, Try This Fun Word Cloud Generator

    ERIC Educational Resources Information Center

    Foote, Carolyn

    2009-01-01

    Nation. New. Common. Generation. These are among the most frequently used words spoken by President Barack Obama in his January 2009 inauguration speech as seen in a fascinating visual display called a Wordle. Educators, too, can harness the power of Wordle to enhance learning. Imagine providing students with a whole new perspective on…

  9. Selective attention in perceptual adjustments to voice.

    PubMed

    Mullennix, J W; Howe, J N

    1999-10-01

    The effects of perceptual adjustments to voice information on the perception of isolated spoken words were examined. In two experiments, spoken target words were preceded or followed within a trial by a neutral word spoken in the same voice as the target or in a different voice. Overall, words were reproduced more accurately on trials on which the voice of the neutral word matched the voice of the spoken target word, suggesting that perceptual adjustments to voice interfere with word processing. This result, however, was mediated by selective attention to voice. The results provide further evidence of a close processing relationship between perceptual adjustments to voice and spoken word recognition.

  10. Enhancing the Performance of Female Students in Spoken English

    ERIC Educational Resources Information Center

    Inegbeboh, Bridget O.

    2009-01-01

    Female students have been discriminated against right from birth in their various cultures and this affects the way they perform in Spoken English class, and how they rate themselves. They have been conditioned to believe that the male gender is superior to the female gender, so they leave the male students to excel in spoken English, while they…

  11. Automatic processing of spoken dialogue in the home hemodialysis domain.

    PubMed

    Lacson, Ronilda; Barzilay, Regina

    2005-01-01

    Spoken medical dialogue is a valuable source of information, and it forms a foundation for diagnosis, prevention and therapeutic management. However, understanding even a perfect transcript of spoken dialogue is challenging for humans because of the lack of structure and the verbosity of dialogues. This work presents a first step towards automatic analysis of spoken medical dialogue. The backbone of our approach is an abstraction of a dialogue into a sequence of semantic categories. This abstraction uncovers structure in informal, verbose conversation between a caregiver and a patient, thereby facilitating automatic processing of dialogue content. Our method induces this structure based on a range of linguistic and contextual features that are integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). This work demonstrates the feasibility of automatically processing spoken medical dialogue.
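    The 33% majority baseline cited above is simply the relative frequency of the most frequent class, which any classifier must beat to be informative. A minimal sketch (the category labels are made up for illustration):

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy of always predicting the most frequent class."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

# Three roughly balanced semantic categories -> baseline near 1/3,
# mirroring the 33% baseline reported in the abstract.
labels = ["clinical"] * 34 + ["advice"] * 33 + ["misc"] * 33
print(majority_baseline_accuracy(labels))  # 0.34
```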

  12. The semantics of prosody: acoustic and perceptual evidence of prosodic correlates to word meaning.

    PubMed

    Nygaard, Lynne C; Herold, Debora S; Namy, Laura L

    2009-01-01

    This investigation examined whether speakers produce reliable prosodic correlates to meaning across semantic domains and whether listeners use these cues to derive word meaning from novel words. Speakers were asked to produce phrases in infant-directed speech in which novel words were used to convey one of two meanings from a set of antonym pairs (e.g., big/small). Acoustic analyses revealed that some acoustic features were correlated with overall valence of the meaning. However, each word meaning also displayed a unique acoustic signature, and semantically related meanings elicited similar acoustic profiles. In two perceptual tests, listeners either attempted to identify the novel words with a matching meaning dimension (picture pair) or with mismatched meaning dimensions. Listeners inferred the meaning of the novel words significantly more often when prosody matched the word meaning choices than when prosody mismatched. These findings suggest that speech contains reliable prosodic markers to word meaning and that listeners use these prosodic cues to differentiate meanings. That prosody is semantic suggests a reconceptualization of traditional distinctions between linguistic and nonlinguistic properties of spoken language. Copyright © 2009 Cognitive Science Society, Inc.

  13. Social Markers of Mild Cognitive Impairment: Proportion of Word Counts in Free Conversational Speech.

    PubMed

    Dodge, Hiroko H; Mattek, Nora; Gregor, Mattie; Bowman, Molly; Seelye, Adriana; Ybarra, Oscar; Asgari, Meysam; Kaye, Jeffrey A

    2015-01-01

    …identifying MCI (vs. normals) was 0.71 (95% Confidence Interval: 0.54–0.89) when average proportion of word counts spoken by subjects was included univariately into the model. An ecologically valid social marker such as the proportion of spoken words produced during spontaneous conversations may be sensitive to transitions from normal cognition to MCI.

  14. Selectivity of lexical-semantic disorders in Polish-speaking patients with aphasia: evidence from single-word comprehension.

    PubMed

    Jodzio, Krzysztof; Biechowska, Daria; Leszniewska-Jodzio, Barbara

    2008-09-01

    Several neuropsychological studies have shown that patients with brain damage may demonstrate selective category-specific deficits of auditory comprehension. The present paper reports on an investigation of aphasic patients' preserved ability to perform a semantic task on spoken words despite severe impairment in auditory comprehension, as shown by failure in matching spoken words to pictured objects. Twenty-six aphasic patients (11 women and 15 men) with impaired speech comprehension due to a left-hemisphere ischaemic stroke were examined; all were right-handed and native speakers of Polish. Six narrowly defined semantic categories for which dissociations have been reported are colors, body parts, animals, food, objects (mostly tools), and means of transportation. An analysis using one-way ANOVA with repeated measures in conjunction with the Lambda-Wilks Test revealed significant discrepancies among these categories in aphasic patients, who had much more difficulty comprehending names of colors than they did comprehending names of other objects (F(5,21)=13.15; p<.001). Animals were most often the easiest category to understand. The possibility of a simple explanation in terms of word frequency and/or visual complexity was ruled out. Evidence from the present study supports the position that so-called "global" aphasia is an imprecise term and should be redefined. These results are discussed within the connectionist and modular perspectives on category-specific deficits in aphasia.

  15. Privacy Policies for Apps Targeted Toward Youth: Descriptive Analysis of Readability

    PubMed Central

    Das, Gitanjali; Cheung, Cynthia; Nebeker, Camille; Bietz, Matthew

    2018-01-01

    Background Due to the growing availability of consumer information, the protection of personal data is of increasing concern. Objective We assessed readability metrics of privacy policies for apps that are either available to or targeted toward youth to inform strategies to educate and protect youth from unintentional sharing of personal data. Methods We reviewed the 1200 highest ranked apps from the Apple and Google Play Stores and systematically selected apps geared toward youth. After applying exclusion criteria, 99 highly ranked apps geared toward minors remained, 64 of which had a privacy policy. We obtained and analyzed these privacy policies using reading grade level (RGL) as a metric. Policies were further compared as a function of app category (free vs paid; entertainment vs social networking vs utility). Results Analysis of privacy policies for these 64 apps revealed an average RGL of 12.78, which is well above the average reading level (8.0) of adults in the United States. There was also a small but statistically significant difference in word count as a function of app category (entertainment: 2546 words, social networking: 3493 words, and utility: 1038 words; P=.02). Conclusions Although users must agree to privacy policies to access digital tools and products, readability analyses suggest that these agreements are not comprehensible to most adults, let alone youth. We propose that stakeholders, including pediatricians and other health care professionals, play a role in educating youth and their guardians about the use of Web-based services and potential privacy risks, including the unintentional sharing of personal data. PMID:29301737
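    The abstract does not state which readability formula underlies its RGL metric. Assuming the widely used Flesch-Kincaid grade level purely for illustration, the grade is a linear function of words per sentence and syllables per word:

```python
def fk_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level from raw text counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A moderately dense passage: 100 words, 5 sentences, 150 syllables.
print(round(fk_grade(100, 5, 150), 2))  # roughly grade 9.9
```

    On this scale, the reported average policy RGL of 12.78 corresponds to late-high-school reading, versus the 8.0 average adult level.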

  16. Seeing and hearing a word: combining eye and ear is more efficient than combining the parts of a word.

    PubMed

    Dubois, Matthieu; Poeppel, David; Pelli, Denis G

    2013-01-01

    To understand why human sensitivity for complex objects is so low, we study how word identification combines eye and ear or parts of a word (features, letters, syllables). Our observers identify printed and spoken words presented concurrently or separately. When researchers measure threshold (energy of the faintest visible or audible signal) they may report either sensitivity (one over the human threshold) or efficiency (ratio of the best possible threshold to the human threshold). When the best possible algorithm identifies an object (like a word) in noise, its threshold is independent of how many parts the object has. But, with human observers, efficiency depends on the task. In some tasks, human observers combine parts efficiently, needing hardly more energy to identify an object with more parts. In other tasks, they combine inefficiently, needing energy nearly proportional to the number of parts, over a 60∶1 range. Whether presented to eye or ear, efficiency for detecting a short sinusoid (tone or grating) with few features is a substantial 20%, while efficiency for identifying a word with many features is merely 1%. Why? We show that the low human sensitivity for words is a cost of combining their many parts. We report a dichotomy between inefficient combining of adjacent features and efficient combining across senses. Joining our results with a survey of the cue-combination literature reveals that cues combine efficiently only if they are perceived as aspects of the same object. Observers give different names to adjacent letters in a word, and combine them inefficiently. Observers give the same name to a word's image and sound, and combine them efficiently. The brain's machinery optimally combines only cues that are perceived as originating from the same object. Presumably such cues each find their own way through the brain to arrive at the same object representation.

  17. Disparities in Life Course Outcomes for Transition-Aged Youth with Disabilities.

    PubMed

    Acharya, Kruti; Meza, Regina; Msall, Michael E

    2017-10-01

    Close to 750,000 youth with special health care needs transition to adult health care in the United States every year; however, less than one-half receive transition-planning services. Using the "F-words" organizing framework, this article explores life course outcomes and disparities in transition-aged youth with disabilities, with a special focus on youth with autism, Down syndrome, and cerebral palsy. Despite the importance of transition, a review of the available literature revealed that (1) youth with disabilities continue to have poor outcomes in all six "F-words" domains (ie, function, family, fitness, fun, friends, and future) and (2) transition outcomes vary by race/ethnicity and disability. Professionals need to adopt a holistic framework to examine transition outcomes within a broader social-ecological context, as well as implement evidence-based transition practices to help improve postsecondary outcomes of youth with disabilities. [Pediatr Ann. 2017;46(10):e371-e376.]. Copyright 2017, SLACK Incorporated.

  18. The Potential of Automatic Word Comparison for Historical Linguistics.

    PubMed

    List, Johann-Mattis; Greenhill, Simon J; Gray, Russell D

    2017-01-01

    The amount of data from languages spoken all over the world is rapidly increasing. Traditional manual methods in historical linguistics need to face the challenges brought by this influx of data. Automatic approaches to word comparison could provide invaluable help to pre-analyze data which can be later enhanced by experts. In this way, computational approaches can take care of the repetitive and schematic tasks, leaving experts to concentrate on answering interesting questions. Here we test the potential of automatic methods to detect etymologically related words (cognates) in cross-linguistic data. Using a newly compiled database of expert cognate judgments across five different language families, we compare how well different automatic approaches distinguish related from unrelated words. Our results show that automatic methods can identify cognates with a very high degree of accuracy, reaching 89% for the best-performing method, Infomap. We identify the specific strengths and weaknesses of these different methods and point to major challenges for future approaches. Current automatic approaches for cognate detection, although not perfect, could become an important component of future research in historical linguistics.
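    The abstract does not describe how methods such as Infomap score word pairs. As a purely hypothetical baseline, normalized edit distance is one simple way to flag candidate cognates; the `cognate_candidate` helper and its 0.5 threshold below are illustrative inventions, not part of the study:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def cognate_candidate(a, b, threshold=0.5):
    """Flag word pairs whose normalized edit distance falls below a cutoff."""
    return levenshtein(a, b) / max(len(a), len(b)) < threshold

# English "night" vs. German "nacht": distance 2 over length 5 -> candidate.
print(cognate_candidate("night", "nacht"))
```

    Real cognate-detection systems refine this idea with sound-class encodings and alignment, since surface spelling alone misses regular sound correspondences.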

  19. The Potential of Automatic Word Comparison for Historical Linguistics

    PubMed Central

    Greenhill, Simon J.; Gray, Russell D.

    2017-01-01

    The amount of data from languages spoken all over the world is rapidly increasing. Traditional manual methods in historical linguistics need to face the challenges brought by this influx of data. Automatic approaches to word comparison could provide invaluable help to pre-analyze data which can be later enhanced by experts. In this way, computational approaches can take care of the repetitive and schematic tasks leaving experts to concentrate on answering interesting questions. Here we test the potential of automatic methods to detect etymologically related words (cognates) in cross-linguistic data. Using a newly compiled database of expert cognate judgments across five different language families, we compare how well different automatic approaches distinguish related from unrelated words. Our results show that automatic methods can identify cognates with a very high degree of accuracy, reaching 89% for the best-performing method Infomap. We identify the specific strengths and weaknesses of these different methods and point to major challenges for future approaches. Current automatic approaches for cognate detection—although not perfect—could become an important component of future research in historical linguistics. PMID:28129337

  20. Processing Electromyographic Signals to Recognize Words

    NASA Technical Reports Server (NTRS)

    Jorgensen, C. C.; Lee, D. D.

    2009-01-01

    A recently invented speech-recognition method applies to words that are articulated by means of the tongue and throat muscles but are otherwise not voiced or, at most, are spoken sotto voce. This method could satisfy a need for speech recognition under circumstances in which normal audible speech is difficult, poses a hazard, is disturbing to listeners, or compromises privacy. The method could also be used to augment traditional speech recognition by providing an additional source of information about articulator activity. The method can be characterized as intermediate between (1) conventional speech recognition through processing of voice sounds and (2) a method, not yet developed, of processing electroencephalographic signals to extract unspoken words directly from thoughts. This method involves computational processing of digitized electromyographic (EMG) signals from muscle innervation acquired by surface electrodes under a subject's chin near the tongue and on the side of the subject's throat near the larynx. After preprocessing, digitization, and feature extraction, EMG signals are processed by a neural-network pattern classifier, implemented in software, that performs the bulk of the recognition task as described.

  1. Youth in Recovery

    ERIC Educational Resources Information Center

    de Miranda, John; Williams, Greg

    2011-01-01

    Young people are entering long-term recovery probably in greater numbers than ever before. A key word here is "probably" because we know precious little about the phenomenon of young people who recover from alcohol and drug addiction. This article is a preliminary exploration of youth in recovery. It reviews several types of recovery support…

  2. "Now We Have Spoken."

    ERIC Educational Resources Information Center

    Zimmer, Patricia Moore

    2001-01-01

    Describes the author's experiences directing a play translated and acted in Korean. Notes that she had to get familiar with the sound of the language spoken fluently, to see how an actor's thought is discerned when the verbal language is not understood. Concludes that so much of understanding and communication unfolds in ways other than with…

  3. Word Recognition and Vocabulary Understanding Strategies for Literacy Success. Bill Harp Professional Teachers Library.

    ERIC Educational Resources Information Center

    Sinatra, Richard

    This book lets readers see how children and youth learn words in the oral and written languages--and how teachers can best assist learners in the understanding, reading, and writing of words for successful literacy development. In the book teachers learn the differing rationales for using sound/symbol or phonics approaches in word learning, for…

  4. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences.

    PubMed

    Koeritzer, Margaret A; Rogers, Chad S; Van Engen, Kristin J; Peelle, Jonathan E

    2018-03-15

    The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously. Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise. Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences. https://doi.org/10.23641/asha.5848059.
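    The d′ measure used to index recognition memory above is the standard signal-detection sensitivity index. A minimal sketch of the usual computation, with illustrative hit and false-alarm rates (not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) minus z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A listener who recognizes 80% of old sentences but false-alarms on 20% of new ones.
print(round(d_prime(0.80, 0.20), 2))  # 1.68
```

    In practice, hit and false-alarm rates of exactly 0 or 1 are first adjusted (e.g., with a log-linear correction), since the z-transform is undefined at the extremes.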

  5. Analysis of word number and content in discourse of patients with mild to moderate Alzheimer's disease.

    PubMed

    de Lira, Juliana Onofre; Minett, Thaís Soares Cianciarullo; Bertolucci, Paulo Henrique Ferreira; Ortiz, Karin Zazo

    2014-01-01

    Alzheimer's disease (AD) is characterized by impairments in memory and other cognitive functions such as language, which can be affected in all aspects including discourse. A picture description task is considered an effective way of obtaining a discourse sample whose key feature is the ability to retrieve appropriate lexical items. There is no consensus on findings showing that performance in content processing of spoken discourse deteriorates from the mildest phase of AD. To compare the quantity and quality of discourse among patients with mild to moderate AD and controls. A cross-sectional study was designed. Subjects aged 50 years and older of both sexes, with one year or more of education, were divided into three groups: control (CG), mild AD (ADG1) and moderate AD (ADG2). Participants were asked to describe the "cookie theft" picture. The total number of complete words spoken and information units (IU) were included in the analysis. There was no significant difference among groups in terms of age, schooling and sex. For number of words spoken, the CG performed significantly better than both the ADG1 and ADG2, but no difference between the two latter groups was found. CG produced almost twice as many information units as the ADG1 and more than double that of the ADG2. Moreover, ADG2 patients had worse performance on IUs compared to the ADG1. Decreased performance in quantity and content of discourse was evident in patients with AD from the mildest phase, but only content (IU) continued to worsen with disease progression.

  6. Analysis of word number and content in discourse of patients with mild to moderate Alzheimer's disease

    PubMed Central

    de Lira, Juliana Onofre; Minett, Thaís Soares Cianciarullo; Bertolucci, Paulo Henrique Ferreira; Ortiz, Karin Zazo

    2014-01-01

    Alzheimer's disease (AD) is characterized by impairments in memory and other cognitive functions such as language, which can be affected in all aspects including discourse. A picture description task is considered an effective way of obtaining a discourse sample whose key feature is the ability to retrieve appropriate lexical items. There is no consensus on findings showing that performance in content processing of spoken discourse deteriorates from the mildest phase of AD. Objective To compare the quantity and quality of discourse among patients with mild to moderate AD and controls. Methods A cross-sectional study was designed. Subjects aged 50 years and older of both sexes, with one year or more of education, were divided into three groups: control (CG), mild AD (ADG1) and moderate AD (ADG2). Participants were asked to describe the "cookie theft" picture. The total number of complete words spoken and information units (IU) were included in the analysis. Results There was no significant difference among groups in terms of age, schooling and sex. For number of words spoken, the CG performed significantly better than both the ADG1 and ADG2, but no difference between the two latter groups was found. CG produced almost twice as many information units as the ADG1 and more than double that of the ADG2. Moreover, ADG2 patients had worse performance on IUs compared to the ADG1. Conclusion Decreased performance in quantity and content of discourse was evident in patients with AD from the mildest phase, but only content (IU) continued to worsen with disease progression. PMID:29213912

  7. Acoustic Masking Disrupts Time-Dependent Mechanisms of Memory Encoding in Word-List Recall

    PubMed Central

    Cousins, Katheryn A.Q.; Dar, Jonathan; Wingfield, Arthur; Miller, Paul

    2013-01-01

    Recall of recently heard words is affected by the clarity of presentation: even if all words are presented with sufficient clarity for successful recognition, those that are more difficult to hear are less likely to be recalled. Such a result demonstrates that memory processing depends on more than whether a word is simply “recognized” versus “not-recognized”. More surprising is that when a single item in a list of spoken words is acoustically masked, prior words that were heard with full clarity are also less likely to be recalled. To account for such a phenomenon, we developed the Linking by Active Maintenance Model (LAMM). This computational model of perception and encoding predicts that these effects are time dependent. Here we challenge our model by investigating whether and how the impact of acoustic masking on memory depends on presentation rate. We find that a slower presentation rate causes a more disruptive impact of stimulus degradation on prior, clearly heard words than does a fast rate. These results are unexpected according to prior theories of effortful listening, but we demonstrate that they can be accounted for by LAMM. PMID:24838269

  8. Acoustic masking disrupts time-dependent mechanisms of memory encoding in word-list recall.

    PubMed

    Cousins, Katheryn A Q; Dar, Hayim; Wingfield, Arthur; Miller, Paul

    2014-05-01

    Recall of recently heard words is affected by the clarity of presentation: Even if all words are presented with sufficient clarity for successful recognition, those that are more difficult to hear are less likely to be recalled. Such a result demonstrates that memory processing depends on more than whether a word is simply "recognized" versus "not recognized." More surprising is that, when a single item in a list of spoken words is acoustically masked, prior words that were heard with full clarity are also less likely to be recalled. To account for such a phenomenon, we developed the linking-by-active-maintenance model (LAMM). This computational model of perception and encoding predicts that these effects will be time dependent. Here we challenged our model by investigating whether and how the impact of acoustic masking on memory depends on presentation rate. We found that a slower presentation rate causes a more disruptive impact of stimulus degradation on prior, clearly heard words than does a fast rate. These results are unexpected according to prior theories of effortful listening, but we demonstrated that they can be accounted for by LAMM.

  9. Important considerations in lesion-symptom mapping: Illustrations from studies of word comprehension.

    PubMed

    Shahid, Hinna; Sebastian, Rajani; Schnur, Tatiana T; Hanayik, Taylor; Wright, Amy; Tippett, Donna C; Fridriksson, Julius; Rorden, Chris; Hillis, Argye E

    2017-06-01

    Lesion-symptom mapping is an important method of identifying networks of brain regions critical for functions. However, results might be influenced substantially by the imaging modality and timing of assessment. We tested the hypothesis that brain regions found to be associated with acute language deficits depend on (1) timing of behavioral measurement, (2) imaging sequences utilized to define the "lesion" (structural abnormality only or structural plus perfusion abnormality), and (3) power of the study. We studied 191 individuals with acute left hemisphere stroke with MRI and language testing to identify areas critical for spoken word comprehension. We used the data from this study to examine the potential impact of these three variables on lesion-symptom mapping. We found that only the combination of structural and perfusion imaging within 48 h of onset identified areas where more abnormal voxels were associated with more severe acute deficits, after controlling for lesion volume and multiple comparisons. The critical area identified with this methodology was the left posterior superior temporal gyrus, consistent with other methods that have identified an important role of this area in spoken word comprehension. Results have implications for interpretation of other lesion-symptom mapping studies, as well as for understanding areas critical for auditory word comprehension in the healthy brain. We propose that lesion-symptom mapping at the acute stage of stroke addresses a different sort of question about brain-behavior relationships than lesion-symptom mapping at the chronic stage, but that timing of behavioral measurement and imaging modalities should be considered in either case. Hum Brain Mapp 38:2990-3000, 2017. © 2017 Wiley Periodicals, Inc.

  10. Seeing and Hearing a Word: Combining Eye and Ear Is More Efficient than Combining the Parts of a Word

    PubMed Central

    Dubois, Matthieu; Poeppel, David; Pelli, Denis G.

    2013-01-01

    To understand why human sensitivity for complex objects is so low, we study how word identification combines eye and ear or parts of a word (features, letters, syllables). Our observers identify printed and spoken words presented concurrently or separately. When researchers measure threshold (energy of the faintest visible or audible signal) they may report either sensitivity (one over the human threshold) or efficiency (ratio of the best possible threshold to the human threshold). When the best possible algorithm identifies an object (like a word) in noise, its threshold is independent of how many parts the object has. But, with human observers, efficiency depends on the task. In some tasks, human observers combine parts efficiently, needing hardly more energy to identify an object with more parts. In other tasks, they combine inefficiently, needing energy nearly proportional to the number of parts, over a 60∶1 range. Whether presented to eye or ear, efficiency for detecting a short sinusoid (tone or grating) with few features is a substantial 20%, while efficiency for identifying a word with many features is merely 1%. Why? We show that the low human sensitivity for words is a cost of combining their many parts. We report a dichotomy between inefficient combining of adjacent features and efficient combining across senses. Joining our results with a survey of the cue-combination literature reveals that cues combine efficiently only if they are perceived as aspects of the same object. Observers give different names to adjacent letters in a word, and combine them inefficiently. Observers give the same name to a word’s image and sound, and combine them efficiently. The brain’s machinery optimally combines only cues that are perceived as originating from the same object. Presumably such cues each find their own way through the brain to arrive at the same object representation. PMID:23734220

  11. Privacy Policies for Apps Targeted Toward Youth: Descriptive Analysis of Readability.

    PubMed

    Das, Gitanjali; Cheung, Cynthia; Nebeker, Camille; Bietz, Matthew; Bloss, Cinnamon

    2018-01-04

    Due to the growing availability of consumer information, the protection of personal data is of increasing concern. We assessed readability metrics of privacy policies for apps that are either available to or targeted toward youth to inform strategies to educate and protect youth from unintentional sharing of personal data. We reviewed the 1200 highest ranked apps from the Apple and Google Play Stores and systematically selected apps geared toward youth. After applying exclusion criteria, 99 highly ranked apps geared toward minors remained, 64 of which had a privacy policy. We obtained and analyzed these privacy policies using reading grade level (RGL) as a metric. Policies were further compared as a function of app category (free vs paid; entertainment vs social networking vs utility). Analysis of privacy policies for these 64 apps revealed an average RGL of 12.78, which is well above the average reading level (8.0) of adults in the United States. There was also a small but statistically significant difference in word count as a function of app category (entertainment: 2546 words, social networking: 3493 words, and utility: 1038 words; P=.02). Although users must agree to privacy policies to access digital tools and products, readability analyses suggest that these agreements are not comprehensible to most adults, let alone youth. We propose that stakeholders, including pediatricians and other health care professionals, play a role in educating youth and their guardians about the use of Web-based services and potential privacy risks, including the unintentional sharing of personal data. ©Gitanjali Das, Cynthia Cheung, Camille Nebeker, Matthew Bietz, Cinnamon Bloss. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 04.01.2018.
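
    The abstract reports reading grade level (RGL) without naming the formula used; the Flesch-Kincaid grade level is one common choice and is sketched here with hypothetical counts:

```python
def flesch_kincaid_grade(total_words: int, total_sentences: int,
                         total_syllables: int) -> float:
    """Standard Flesch-Kincaid grade-level formula (higher = harder to read)."""
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)


# Hypothetical privacy-policy counts: long sentences, many polysyllabic words.
grade = flesch_kincaid_grade(total_words=2500, total_sentences=100,
                             total_syllables=4600)
print(round(grade, 2))  # 15.87 -- well above the 8th-grade US adult average
```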

  12. Attention for speaking: domain-general control from the anterior cingulate cortex in spoken word production

    PubMed Central

    Piai, Vitória; Roelofs, Ardi; Acheson, Daniel J.; Takashima, Atsuko

    2013-01-01

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal color naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex (ACC) that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus (STG). Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the ACC, a region that is likely implementing domain-general attentional control. 
PMID:24368899

  13. Strength of word-specific neural memory traces assessed electrophysiologically.

    PubMed

    Alexandrov, Alexander A; Boricheva, Daria O; Pulvermüller, Friedemann; Shtyrov, Yury

    2011-01-01

    Memory traces for words are frequently conceptualized neurobiologically as networks of neurons interconnected via reciprocal links developed through associative learning in the process of language acquisition. Neurophysiological reflection of activation of such memory traces has been reported using the mismatch negativity brain potential (MMN), which demonstrates an enhanced response to meaningful words over meaningless items. This enhancement is believed to be generated by the activation of strongly intraconnected long-term memory circuits for words that can be automatically triggered by spoken linguistic input and that are absent for unfamiliar phonological stimuli. This conceptual framework critically predicts different amounts of activation depending on the strength of the word's lexical representation in the brain. The frequent use of words should lead to more strongly connected representations, whereas less frequent items would be associated with more weakly linked circuits. A word with higher frequency of occurrence in the subject's language should therefore lead to a more pronounced lexical MMN response than its low-frequency counterpart. We tested this prediction by comparing the event-related potentials elicited by low- and high-frequency words in a passive oddball paradigm; physical stimulus contrasts were kept identical. We found that, consistent with our prediction, presenting the high-frequency stimulus led to a significantly more pronounced MMN response relative to the low-frequency one, a finding that is highly similar to previously reported MMN enhancement to words over meaningless pseudowords. Furthermore, activation elicited by the higher-frequency word peaked earlier relative to low-frequency one, suggesting more rapid access to frequently used lexical entries. These results lend further support to the above view on word memory traces as strongly connected assemblies of neurons. 
    The speed and magnitude of their activation appear to be linked to the strength of these lexical representations, which in turn reflects the frequency of word use.

  14. Word naming times and psycholinguistic norms for Italian nouns.

    PubMed

    Barca, Laura; Burani, Cristina; Arduino, Lisa S

    2002-08-01

    The present study describes normative measures for 626 Italian simple nouns. The database (LEXVAR.XLS) is freely available for downloading on the Web site http://wwwistc.ip.rm.cnr.it/materia/database/. For each of the 626 nouns, values for the following variables are reported: age of acquisition, familiarity, imageability, concreteness, adult written frequency, child written frequency, adult spoken frequency, number of orthographic neighbors, mean bigram frequency, length in syllables, and length in letters. A classification of lexical stress and of the type of word-initial phoneme is also provided. The intercorrelations among the variables, a factor analysis, and the effects of variables and of the extracted factors on word naming are reported. Naming latencies were affected primarily by a factor including word length and neighborhood size and by a word frequency factor. Neither a semantic factor including imageability, concreteness, and age of acquisition nor a factor defined by mean bigram frequency had significant effects on pronunciation times. These results hold for a language with shallow orthography, like Italian, for which lexical nonsemantic properties have been shown to affect reading aloud. These norms are useful in a variety of research areas involving the manipulation and control of stimulus attributes.
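
    As an illustration of the item-level analyses such norms support, the sketch below correlates naming latency with word length; all values are invented and not drawn from the LEXVAR database:

```python
import math


def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, no third-party libraries."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)


length_in_letters = [4, 5, 6, 7, 8, 9]              # hypothetical items
naming_latency_ms = [510, 520, 534, 541, 555, 566]  # hypothetical latencies

print(round(pearson_r(length_in_letters, naming_latency_ms), 3))
```

    A strong positive correlation (longer words named more slowly) is the pattern summarized by the length/neighborhood factor reported above.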

  15. Children's Use of Evaluative Devices in Spoken and Written Narratives

    ERIC Educational Resources Information Center

    Drijbooms, Elise; Groen, Margriet A.; Verhoeven, Ludo

    2017-01-01

    This study investigated the development of evaluation in narratives from middle to late childhood, within the context of differentiating between spoken and written modalities. Two parallel forms of a picture story were used to elicit spoken and written narratives from fourth- and sixth-graders. It was expected that, in addition to an increase of…

  16. "Stay with Your Words": Indigenous Youth, Local Policy, and the Work of Language Fortification

    ERIC Educational Resources Information Center

    Huaman, Elizabeth Sumida; Martin, Nathan D.; Chosa, Carnell T.

    2016-01-01

    This article focuses on the work of cultural and language maintenance and fortification with Indigenous youth populations. Here, the idea of work represents two strands of thought: first, research that is partnered with Indigenous youth-serving institutions and that prioritizes Indigenous youth perspectives; and second, the work of cultural and…

  17. Individual language experience modulates rapid formation of cortical memory circuits for novel words

    PubMed Central

    Kimppa, Lilli; Kujala, Teija; Shtyrov, Yury

    2016-01-01

    Mastering multiple languages is an increasingly important ability in the modern world; furthermore, multilingualism may affect human learning abilities. Here, we test how the brain’s capacity to rapidly form new representations for spoken words is affected by prior individual experience in non-native language acquisition. Formation of new word memory traces is reflected in a neurophysiological response increase during a short exposure to a novel lexicon. Therefore, we recorded changes in electrophysiological responses to phonologically native and non-native novel word-forms during a perceptual learning session, in which novel stimuli were repetitively presented to healthy adults in either ignore or attend conditions. We found that a larger number of previously acquired languages and an earlier average age of acquisition (AoA) predicted a greater response increase to novel non-native word-forms. This suggests that early and extensive language experience is associated with greater neural flexibility for acquiring novel words with unfamiliar phonology. Conversely, later AoA was associated with a stronger response increase for phonologically native novel word-forms, indicating better tuning of neural linguistic circuits to native phonology. The results suggest that individual language experience has a strong effect on the neural mechanisms of word learning, and that it interacts with the phonological familiarity of the novel lexicon. PMID:27444206

  18. Enhancing the Application and Evaluation of a Discrete Trial Intervention Package for Eliciting First Words in Preverbal Preschoolers with ASD

    ERIC Educational Resources Information Center

    Tsiouri, Ioanna; Simmons, Elizabeth Schoen; Paul, Rhea

    2012-01-01

    This study evaluates the effectiveness of an intervention package including a discrete trial program (Rapid Motor Imitation Antecedent Training (Tsiouri and Greer, "J Behav Educat" 12:185-206, 2003) combined with parent education for eliciting first words in children with ASD who had little or no spoken language. Evaluation of the approach…

  19. Lexical Tone Variation and Spoken Word Recognition in Preschool Children: Effects of Perceptual Salience

    ERIC Educational Resources Information Center

    Singh, Leher; Tan, Aloysia; Wewalaarachchi, Thilanga D.

    2017-01-01

    Children undergo gradual progression in their ability to differentiate correct and incorrect pronunciations of words, a process that is crucial to establishing a native vocabulary. For the most part, the development of mature phonological representations has been researched by investigating children's sensitivity to consonant and vowel variation,…

  20. Spoken Grammar: Where Are We and Where Are We Going?

    ERIC Educational Resources Information Center

    Carter, Ronald; McCarthy, Michael

    2017-01-01

    This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…

  1. Terminating Devices in Spoken French.

    ERIC Educational Resources Information Center

    Andrews, Barry J.

    1989-01-01

    A study examines the way in which one group of discourse connectors, terminators, function in contemporary spoken French. Three types of terminators, elements used at the end of an utterance or section to indicate its completion, are investigated, including utterance terminators, interrogative tags, and terminal tags. (Author/MSE)

  2. Non-intentional but not automatic: reduction of word- and arrow-based compatibility effects by sound distractors in the same categorical domain.

    PubMed

    Miles, James D; Proctor, Robert W

    2009-10-01

    In the current study, we show that the non-intentional processing of visually presented words and symbols can be attenuated by sounds. Importantly, this attenuation is dependent on the similarity in categorical domain between the sounds and words or symbols. Participants performed a task in which left or right responses were made contingent on the color of a centrally presented target that was either a location word (LEFT or RIGHT) or a left or right arrow. Responses were faster when they were on the side congruent with the word or arrow. This bias was reduced for location words by a neutral spoken word and for arrows by a tone series, but not vice versa. We suggest that words and symbols are processed with minimal attentional requirements until they are categorized into specific knowledge domains, but then become sensitive to other information within the same domain regardless of the similarity between modalities.

  3. ERP evidence for implicit L2 word stress knowledge in listeners of a fixed-stress language.

    PubMed

    Kóbor, Andrea; Honbolygó, Ferenc; Becker, Angelika B C; Schild, Ulrike; Csépe, Valéria; Friedrich, Claudia K

    2018-06-01

    Languages with contrastive stress, such as English or German, distinguish some words only via the stress status of their syllables, such as "CONtent" and "conTENT" (capitals indicate a stressed syllable). Listeners with a fixed-stress native language, such as Hungarian, have difficulties in explicitly discriminating variation of the stress position in a second language (L2). However, Event-Related Potentials (ERPs) indicate that Hungarian listeners implicitly notice variation from their native fixed-stress pattern. Here we used ERPs to investigate Hungarian listeners' implicit L2 processing. In a cross-modal word fragment priming experiment, we presented spoken stressed and unstressed German word onsets (primes) followed by printed versions of initially stressed and initially unstressed German words (targets). ERPs reflected stress priming exerted by both prime types. This indicates that Hungarian listeners implicitly linked German words with the stress status of the primes. Thus, the formerly described explicit stress discrimination difficulty associated with a fixed-stress native language does not generalize to implicit aspects of L2 word stress processing. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Spoken Language Processing Model: Bridging Auditory and Language Processing to Guide Assessment and Intervention

    ERIC Educational Resources Information Center

    Medwetsky, Larry

    2011-01-01

    Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…

  5. Domain-specific and domain-general constraints on word and sequence learning.

    PubMed

    Archibald, Lisa M D; Joanisse, Marc F

    2013-02-01

    The relative influences of language-related and memory-related constraints on the learning of novel words and sequences were examined by comparing individual differences in performance of children with and without specific deficits in either language or working memory. Children recalled lists of words in a Hebbian learning protocol in which occasional lists repeated, yielding improved recall over the course of the task on the repeated lists. The task involved presentation of pictures of common nouns followed immediately by equivalent presentations of the spoken names. The same participants also completed a paired-associate learning task involving word-picture and nonword-picture pairs. Hebbian learning was observed for all groups. Domain-general working memory constrained immediate recall, whereas language abilities impacted recall in the auditory modality only. In addition, working memory constrained paired-associate learning generally, whereas language abilities disproportionately impacted novel word learning. Overall, all of the learning tasks were highly correlated with domain-general working memory. The learning of nonwords was additionally related to general intelligence, phonological short-term memory, language abilities, and implicit learning. The results suggest that distinct associations between language- and memory-related mechanisms support learning of familiar and unfamiliar phonological forms and sequences.

  6. Spoken language outcomes after hemispherectomy: factoring in etiology.

    PubMed

    Curtiss, S; de Bode, S; Mathern, G W

    2001-12-01

    We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) and acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p =.0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p =.0006); right-sided resections led to higher SLRs only for the acquired group (p =.0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p =.0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology. Copyright 2001 Elsevier Science.

  7. Spoken Grammar Awareness Raising: Does It Affect the Listening Ability of Iranian EFL Learners?

    ERIC Educational Resources Information Center

    Rashtchi, Mojgan; Afzali, Mahnaz

    2011-01-01

    Advances in spoken corpora analysis have brought about new insights into language pedagogy and have led to an awareness of the characteristics of spoken language. Current findings have shown that grammar of spoken language is different from written language. However, most listening and speaking materials are concocted based on written grammar and…

  8. Recall of English Function Words and Inflections by Skilled and Average Deaf Readers.

    ERIC Educational Resources Information Center

    Kelly, Leonard P.

    1993-01-01

    The performance of 17 youth on a verbatim recall task indicated that skilled deaf readers are more able than average deaf readers to sustain a record of English function words and inflections. The relative speed of skilled readers when making lexical decisions about phonologically similar word pairs indicated greater access to phonological…

  9. Selective attention to phonology dynamically modulates initial encoding of auditory words within the left hemisphere.

    PubMed

    Yoncheva, Yuliya; Maurer, Urs; Zevin, Jason D; McCandliss, Bruce D

    2014-08-15

    Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings suggest a key role for selective attention in on-line phonological computations. Furthermore, these findings motivate future research on the role that neural mechanisms of attention may
    play in literacy acquisition.

  11. When Diglossia Meets Dyslexia: The Effect of Diglossia on Voweled and Unvoweled Word Reading among Native Arabic-Speaking Dyslexic Children

    ERIC Educational Resources Information Center

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2017-01-01

    Native Arabic speakers read in a language variety that is different from the one they use for everyday speech. The aim of the present study was: (1) to examine Spoken Arabic (SpA) and Standard Arabic (StA) voweled and unvoweled word reading among native-speaking sixth graders with developmental dyslexia; and (2) to determine whether SpA reading…

  12. Effects of blocking and presentation on the recognition of word and nonsense syllables in noise

    NASA Astrophysics Data System (ADS)

    Benkí, José R.

    2003-10-01

    Listener expectations may have significant effects on spoken word recognition, modulating word similarity effects from the lexicon. This study investigates the effect of blocking by lexical status on the recognition of word and nonsense syllables in noise. 240 phonemically matched word and nonsense CVC syllables [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84, 101-108 (1988)] were presented to listeners at different S/N ratios for identification. In the mixed condition, listeners were presented with blocks containing both words and nonwords, while listeners in the blocked condition were presented with the trials in blocks containing either words or nonwords. The targets were presented in isolation with 50 ms of preceding and following noise. Preliminary results indicate no effect of blocking on accuracy for either word or nonsense syllables; results from neighborhood density analyses will be presented. Consistent with previous studies, a j-factor analysis indicates that words are perceived as containing at least 0.5 fewer independent units than nonwords in both conditions. Relative to previous work on syllables presented in a frame sentence [Benkí, J. Acoust. Soc. Am. 113, 1689-1705 (2003)], initial consonants were perceived significantly less accurately, while vowels and final consonants were perceived at comparable rates.
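
    The j-factor analysis mentioned here (from the cited Boothroyd and Nittrouer paper) models whole-item recognition probability as the part probability raised to the power j, the number of effectively independent parts; a sketch with illustrative probabilities, not the study's data:

```python
import math


def j_factor(p_whole: float, p_part: float) -> float:
    """Solve p_whole = p_part ** j for j, the number of independent units."""
    return math.log(p_whole) / math.log(p_part)


# Nonsense CVCs: three phonemes recognized independently (0.8 ** 3 = 0.512).
print(round(j_factor(p_whole=0.512, p_part=0.8), 2))  # 3.0
# Real words: lexical knowledge adds redundancy, so j drops below 3.
print(round(j_factor(p_whole=0.573, p_part=0.8), 2))
```

    A j of roughly 2.5 for words versus 3 for matched nonwords is what "at least 0.5 fewer independent units" amounts to.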

  13. Effect of training on word-recognition performance in noise for young normal-hearing and older hearing-impaired listeners.

    PubMed

    Burk, Matthew H; Humes, Larry E; Amos, Nathan E; Strauser, Lauren E

    2006-06-01

    The objective of this study was to evaluate the effectiveness of a training program for hearing-impaired listeners to improve their speech-recognition performance within a background noise when listening to amplified speech. Both noise-masked young normal-hearing listeners, used to model the performance of elderly hearing-impaired listeners, and a group of elderly hearing-impaired listeners participated in the study. Of particular interest was whether training on an isolated word list presented by a standardized talker can generalize to everyday speech communication across novel talkers. Word-recognition performance was measured for both young normal-hearing (n = 16) and older hearing-impaired (n = 7) adults. Listeners were trained on a set of 75 monosyllabic words spoken by a single female talker over a 9- to 14-day period. Performance for the familiar (trained) talker was measured before and after training in both open-set and closed-set response conditions. Performance on the trained words of the familiar talker were then compared with those same words spoken by three novel talkers and to performance on a second set of untrained words presented by both the familiar and unfamiliar talkers. The hearing-impaired listeners returned 6 mo after their initial training to examine retention of the trained words as well as their ability to transfer any knowledge gained from word training to sentences containing both trained and untrained words. Both young normal-hearing and older hearing-impaired listeners performed significantly better on the word list in which they were trained versus a second untrained list presented by the same talker. Improvements on the untrained words were small but significant, indicating some generalization to novel words. 
    The large increase in performance on the trained words, however, was maintained across novel talkers, pointing to the listeners' greater focus on lexical memorization of the words rather than on talker-specific acoustic detail.

  14. Nurturing a lexical legacy: reading experience is critical for the development of word reading skill

    NASA Astrophysics Data System (ADS)

    Nation, Kate

    2017-12-01

    The scientific study of reading has taught us much about the beginnings of reading in childhood, with clear evidence that the gateway to reading opens when children are able to decode, or 'sound out' written words. Similarly, there is a large evidence base charting the cognitive processes that characterise skilled word recognition in adults. Less understood is how children develop word reading expertise. Once basic reading skills are in place, what factors are critical for children to move from novice to expert? This paper outlines the role of reading experience in this transition. Encountering individual words in text provides opportunities for children to refine their knowledge about how spelling represents spoken language. Alongside this, however, reading experience provides much more than repeated exposure to individual words in isolation. According to the lexical legacy perspective, outlined in this paper, experiencing words in diverse and meaningful language environments is critical for the development of word reading skill. At its heart is the idea that reading provides exposure to words in many different contexts, episodes and experiences which, over time, sum to a rich and nuanced database about their lexical history within an individual's experience. These rich and diverse encounters bring about local variation at the word level: a lexical legacy that is measurable during word reading behaviour, even in skilled adults.

  15. Resettlement experiences and resilience in refugee youth in Perth, Western Australia.

    PubMed

    Earnest, Jaya; Mansi, Ruth; Bayati, Sara; Earnest, Joel Anthony; Thompson, Sandra C

    2015-06-10

    In Australia, the two major pathways of refugee entry are the United Nations High Commissioner for Refugees resettlement programme and irregular maritime arrivals (IMAs) seeking asylum. The Australian Government's policies towards IMAs since July 2013 are controversial, uncompromising and consistently harsh, with asylum seekers held in detention centres for prolonged periods. Refugees and asylum seekers have distinct and unique stressors that make resettlement difficult. This exploratory study examines resettlement experiences of refugee youth in Western Australia using a psychosocial conceptual framework and qualitative methods. Focus group discussions and key informant interviews were undertaken, and verbatim transcripts were analysed thematically. The themes identified language and its impact; experiences with education, health, and social activities; support structures provided to youth; and support for future aspirations as critical to successful resettlement. This exploratory study contributes to a broader understanding of the resettlement experiences of refugee youth, drawing on their current and past experiences, cultural differences and mechanisms for coping. Fluency in English, especially spoken English, facilitated successful resettlement. Our results align with previous studies documenting that support programs are vital for successful resettlement. Although faced with immense difficulties, refugee youth are resilient, want to succeed and have aspirations for the future. Strategies and recommendations suggested by refugee youth themselves could be used to develop interventions that assist successful resettlement.

  16. The Listening and Spoken Language Data Repository: Design and Project Overview

    ERIC Educational Resources Information Center

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…

  17. The Exception Does Not Rule: Attention Constrains Form Preparation in Word Production

    PubMed Central

    O’Séaghdha, Pádraig G.; Frazer, Alexandra K.

    2014-01-01

    Form preparation in word production, the benefit of exploiting a useful common sound (such as the first phoneme) of iteratively spoken small groups of words, is notoriously fastidious, exhibiting a seemingly categorical, all-or-none character, and a corresponding susceptibility to ‘killers’ of preparation. In particular, the presence of a single exception item in a group of otherwise phonologically consistent words has been found to eliminate the benefit of knowing a majority characteristic. This has been interpreted to mean that form preparation amounts to partial production, and thus provides a window on fundamental processes of phonological word encoding (e.g., Levelt et al., 1999). However, preparation of only fully distributed properties appears to be non-optimal, and is difficult to reconcile with the sensitivity of cognitive responses to probabilities in other domains. We show here that the all-or-none characteristic of form preparation is specific to task format. Preparation for sets that included an exception item occurred in ecologically valid production tasks, picture naming (Experiment 1), and word naming (Experiment 2). Preparation failed only in the commonly used, but indirect and resource-intensive, associative cuing task (Experiment 3). We outline an account of form preparation in which anticipation of word-initial phonological fragments uses a limited capacity, sustained attentional capability that points to rather than enacts possibilities for imminent speech. PMID:24548328

  18. The Frequency and Functions of "Just" in British Academic Spoken English

    ERIC Educational Resources Information Center

    Grant, Lynn E.

    2011-01-01

    This study investigates the frequency and functions of "just" in British academic spoken English. It adopts the meanings of "just" established by Lindemann and Mauranen, 2001, taken from the occurrences of "just" across five speech events in the Michigan Corpus of Academic Spoken English (MICASE) to see if they also apply to occurrences of "just"…

  19. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    ERIC Educational Resources Information Center

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  20. A picture's worth a thousand words: engaging youth in CBPR using the creative arts.

    PubMed

    Yonas, Michael A; Burke, Jessica G; Rak, Kimberly; Bennett, Antoine; Kelly, Vera; Gielen, Andrea C

    2009-01-01

    Engaging youth and incorporating their unique expertise into the research process is important when addressing issues related to their health. Visual Voices is an arts-based participatory data collection method designed to work together with young people and communities to collaboratively elicit, examine, and celebrate the perspectives of youth. This article presents a process for using the creative arts with young people as a participatory data collection method and gives examples of their perspectives on safety and violence. Using the creative arts, this study examined and illustrates youth perspectives on how community factors influence safety and violence. Visual Voices was conducted with a total of 22 African-American youth in two urban neighborhoods. The method included creative arts-based writing, drawing, and painting activities designed to yield culturally relevant data generated and explored by youth. Qualitative data were captured through the creative content of writings, drawings, and paintings created by the youth, as well as transcripts of audio-recorded group discussions. Data were analyzed for thematic content and triangulated across traditional and nontraditional mediums. Findings were interpreted with participants and shared publicly for further reflection and utilization. The youth participants identified a range of issues related to community factors, community safety, and violence. Such topics included the role of schools and social networks within the community as safe places, and corner stores and abandoned houses as unsafe places. Visual Voices is a creative research method that provides a unique opportunity for youth to generate a range of ideas through access to the multiple creative methods provided. It is an innovative process that generates rich and valuable data about topics of interest and the lived experiences of young community members.

  1. The Penefit of Salience: Salient Accented, but Not Unaccented Words Reveal Accent Adaptation Effects.

    PubMed

    Grohe, Ann-Kathrin; Weber, Andrea

    2016-01-01

    In two eye-tracking experiments, the effects of salience in accent training and speech accentedness on spoken-word recognition were investigated. Salience was expected to increase a stimulus' prominence and therefore promote learning. A training-test paradigm was used on native German participants utilizing an artificial German accent. Salience was elicited by two different criteria: production and listening training as a subjective criterion and accented (Experiment 1) and canonical test words (Experiment 2) as an objective criterion. During training in Experiment 1, participants either read single German words out loud and deliberately devoiced initial voiced stop consonants (e.g., Balken, "beam", pronounced as *Palken), or they listened to pre-recorded words with the same accent. In a subsequent eye-tracking experiment, looks to auditorily presented target words with the accent were analyzed. Participants from both training conditions fixated accented target words more often than a control group without training. Training was identical in Experiment 2, but during test, canonical German words that overlapped in onset with the accented words from training were presented as target words (e.g., Palme, "palm tree", overlapped in onset with the training word *Palken) rather than accented words. This time, no training effect was observed; recognition of canonical word forms was not affected by having learned the accent. Therefore, accent learning was only visible when the accented test tokens in Experiment 1, which were not included in the test of Experiment 2, possessed sufficient salience based on the objective criterion "accent." These effects were not modified by the subjective criterion of salience from the training modality.

  2. Mapping Students' Spoken Conceptions of Equality

    ERIC Educational Resources Information Center

    Anakin, Megan

    2013-01-01

    This study expands contemporary theorising about students' conceptions of equality. A nationally representative sample of New Zealand students were asked to provide a spoken numerical response and an explanation as they solved an arithmetic additive missing number problem. Students' responses were conceptualised as acts of communication and…

  3. In a Manner of Speaking: Assessing Frequent Spoken Figurative Idioms to Assist ESL/EFL Teachers

    ERIC Educational Resources Information Center

    Grant, Lynn E.

    2007-01-01

    This article outlines criteria to define a figurative idiom, and then compares the frequent figurative idioms identified in two sources of spoken American English (academic and contemporary) to their frequency in spoken British English. This is done by searching the spoken part of the British National Corpus (BNC), to see whether they are frequent…

  4. The role of tone and segmental information in visual-word recognition in Thai.

    PubMed

    Winskel, Heather; Ratitamkul, Theeraporn; Charoensit, Akira

    2017-07-01

    Tone languages represent a large proportion of the spoken languages of the world and yet lexical tone is understudied. Thai offers a unique opportunity to investigate the role of lexical tone processing during visual-word recognition, as tone is explicitly expressed in its script. We used colour words and their orthographic neighbours as stimuli to investigate facilitation (Experiment 1) and interference (Experiment 2) Stroop effects. Five experimental conditions were created: (a) the colour word (e.g., ขาว /kʰã:w/ [white]), (b) tone different word (e.g., ข่าว /kʰà:w/ [news]), (c) initial consonant phonologically same word (e.g., คาว /kʰa:w/ [fishy]), where the initial consonant of the word was phonologically the same but orthographically different, (d) initial consonant different, tone same word (e.g., หาว /hã:w/ [yawn]), where the initial consonant was orthographically different but the tone of the word was the same, and (e) initial consonant different, tone different word (e.g., กาว /ka:w/ [glue]), where the initial consonant was orthographically different, and the tone was different. In order to examine whether tone information per se had a facilitative effect, we also included a colour congruent word condition where the segmental (S) information was different but the tone (T) matched the colour word (S-T+) in Experiment 2. Facilitation/interference effects were found for all five conditions when compared with a neutral control word. Results of the critical comparisons revealed that tone information comes into play at a later stage in lexical processing, and orthographic information contributes more than phonological information.

  5. The Development of Spoken Language in Deaf Children: Explaining the Unexplained Variance.

    ERIC Educational Resources Information Center

    Musselman, Carol; Kircaali-Iftar, Gonul

    1996-01-01

    This study compared 20 young deaf children with either exceptionally good or exceptionally poor spoken language for their hearing loss, age, and intelligence. Factors associated with high performance included earlier use of binaural ear-level aids, better educated mothers, auditory/verbal or auditory/oral instruction, reliance on spoken language…

  6. The role of gestures in the transition from one- to two-word speech in a variety of children with intellectual disabilities.

    PubMed

    Vandereet, Joke; Maes, Bea; Lembrechts, Dirk; Zink, Inge

    2011-01-01

    Over the past decades, the links between gesture and language have been intensively studied. For example, the emergence of requesting and commenting gestures has been found to signal the onset of intentional communication. Furthermore, in typically developing children, gestures play a transitional role in the acquisition of early lexical and syntactic milestones. Previous research has demonstrated that gesture-word combinations, particularly supplementary ones, not only precede but also reliably predict the onset of two-word speech. However, the gestural correlates of two-word speech have rarely been studied in children with intellectual disabilities. The primary aim was to investigate developmental changes in speech and gesture use as well as to relate the use of gesture-word combinations to the onset of two-word speech in children with intellectual disabilities. A supplementary aim was to investigate differences in speech and gesture use between requests and comments in children with intellectual disabilities. Participants in this study were 16 children with intellectual disabilities (eight girls, eight boys). Chronological ages at the start of the study were between 3;1 and 5;7 years; mental ages were between 1;5 and 3;3 years. Every 4 months within a 2-year period, children's requests and comments were sampled during structured interactions. All gestures and words used communicatively to request and comment were transcribed. Although children's use of spoken words as well as the diversity in their spoken vocabularies increased over time, gestures were used at a constant rate over time. Temporal tendencies similar to those described in typically developing children were observed: gesture-word combinations typically preceded, rather than followed, two-word speech. Furthermore, gestures (deictic gestures in particular) were more often used to request than to comment. Overall, gestures were used as a transitional tool towards children's first two-word utterances.

  7. Presentation format effects in working memory: the role of attention.

    PubMed

    Foos, Paul W; Goolkasian, Paula

    2005-04-01

    Four experiments are reported in which participants attempted to remember three or six concrete nouns, presented as pictures, spoken words, or printed words, while also verifying the accuracy of sentences. Hypotheses meant to explain the higher recall of pictures and spoken words over printed words were tested. Increasing the difficulty and changing the type of processing task from arithmetic to a visual/spatial reasoning task did not influence recall. An examination of long-term modality effects showed that those effects were not sufficient to explain the superior performance with spoken words and pictures. Only when we manipulated the allocation of attention to the items in the storage task, by requiring the participants to articulate the items and by presenting the stimulus items under a degraded condition, were we able to reduce or remove the effect of presentation format. The findings suggest that the better recall of pictures and spoken words over printed words results from the fact that, under normal presentation conditions, printed words receive less processing attention than pictures and spoken words do.

  8. Information status and word order in Croatian Sign Language.

    PubMed

    Milkovic, Marina; Bradaric-Joncic, Sandra; Wilbur, Ronnie B

    2007-01-01

    This paper presents the results of research on information structure and word order in narrative sentences taken from signed short stories in Croatian Sign Language (HZJ). The basic word order in HZJ is SVO. Factors that result in other word orders include: reversible arguments, verb categories, locative constructions, contrastive focus, and prior context. Word order in context depends on communication rules, based on the relationship between old (theme) and new (rheme) information, which is predicated of the theme. In accordance with Grice's Maxim of Quantity, HZJ has a tendency to omit old information, or to reduce it to pronominal status. If old information is overtly signed in non-pronominal form, it precedes the rheme. We have observed a variety of sign language mechanisms that are used to show items of reduced contextual significance: use of assigned spatial location for previously introduced referents; eyegaze to indicate spatial location of previously introduced referents; use of the non-dominant hand for backgrounded information; use of a special category of signs known as classifiers as pronominal indicators of previously introduced referents; and complex noun phrases that allow a single occurrence of a noun to simultaneously serve multiple functions. These devices permit information to be conveyed without the need for separate signs for every referent, which would create longer constructions that could be taxing to both production and perception. The results of this research are compatible with well-known word order generalizations - HZJ has its own grammar, independent of spoken language, like any other sign language.

  9. Are written and spoken recall of text equivalent?

    PubMed

    Kellogg, Ronald T

    2007-01-01

    Writing is less practiced than speaking, graphemic codes are activated only in writing, and the retrieved representations of the text must be maintained in working memory longer because handwritten output is slower than speech. These extra demands on working memory could result in less effort being given to retrieval during written compared with spoken text recall. To test this hypothesis, college students read or heard Bartlett's "War of the Ghosts" and then recalled the text in writing or speech. Spoken recall produced more accurately recalled propositions and more major distortions (e.g., inferences) than written recall. The results suggest that writing reduces the retrieval effort given to reconstructing the propositions of a text.

  10. Handbook for Spoken Mathematics: (Larry's Speakeasy).

    ERIC Educational Resources Information Center

    Chang, Lawrence A.; And Others

    This handbook is directed toward those who have to deal with spoken mathematics, yet have insufficient background to know the correct verbal expression for the written symbolic one. It compiles consistent and well-defined ways of uttering mathematical expressions so listeners will receive clear, unambiguous, and well-pronounced representations.…

  11. A Reading of Eekwol's "Apprentice to the Mystery" as an Expression of Cree Youth's Cultural Role and Responsibility

    ERIC Educational Resources Information Center

    MacKay, Gail A.

    2010-01-01

    On a chilly Toronto evening in November 2005, an envelope was opened in a darkened auditorium, and the words spoken reached out across the land to Muskoday First Nation in Saskatchewan. No doubt Lindsay Knight's family was watching the televised Canadian Aboriginal Music Awards that night and would have felt elated to hear her being honored with…

  12. Effects of metric hierarchy and rhyme predictability on word duration in The Cat in the Hat.

    PubMed

    Breen, Mara

    2018-05-01

    Word durations convey many types of linguistic information, including intrinsic lexical features like length and frequency and contextual features like syntactic and semantic structure. The current study was designed to investigate whether hierarchical metric structure and rhyme predictability account for durational variation over and above other features in productions of a rhyming, metrically-regular children's book: The Cat in the Hat (Dr. Seuss, 1957). One-syllable word durations and inter-onset intervals were modeled as functions of segment number, lexical frequency, word class, syntactic structure, repetition, and font emphasis. Consistent with prior work, factors predicting longer word durations and inter-onset intervals included more phonemes, lower frequency, first mention, alignment with a syntactic boundary, and capitalization. A model parameter corresponding to metric grid height improved model fit of word durations and inter-onset intervals. Specifically, speakers realized five levels of metric hierarchy with inter-onset intervals such that interval duration increased linearly with increased height in the metric hierarchy. Conversely, speakers realized only three levels of metric hierarchy with word duration, demonstrating that they shortened the highly predictable rhyme resolutions. These results further understanding of the factors that affect spoken word duration, and demonstrate the myriad cues that children receive about linguistic structure from nursery rhymes. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Predictors of Early Reading Skill in 5-Year-Old Children With Hearing Loss Who Use Spoken Language

    PubMed Central

    Ching, Teresa Y.C.; Crowe, Kathryn; Day, Julia; Seeto, Mark

    2013-01-01

    This research investigated the concurrent association between early reading skills and phonological awareness (PA), print knowledge, language, cognitive, and demographic variables in 101 5-year-old children with prelingual hearing losses ranging from mild to profound who communicated primarily using spoken language. All participants were fitted with hearing aids (n = 71) or cochlear implants (n = 30). They completed standardized assessments of PA, receptive vocabulary, letter knowledge, word and non-word reading, passage comprehension, math reasoning, and nonverbal cognitive ability. Multiple regressions revealed that PA (assessed using judgments of similarity based on words’ initial or final sounds) made a significant, independent contribution to children’s early reading ability (for both letters and words/non-words) after controlling for variation in receptive vocabulary, nonverbal cognitive ability, and a range of demographic variables (including gender, degree of hearing loss, communication mode, type of sensory device, age at fitting of sensory devices, and level of maternal education). Importantly, the relationship between PA and reading was specific to reading and did not generalize to another academic ability, math reasoning. Additional multiple regressions showed that letter knowledge (names or sounds) was superior in children whose mothers had undertaken post-secondary education, and that better receptive vocabulary was associated with less severe hearing loss, use of a cochlear implant, and earlier age at implant switch-on. Earlier fitting of hearing aids or cochlear implants was not, however, significantly associated with better PA or reading outcomes in this cohort of children, most of whom were fitted with sensory devices before 3 years of age. PMID:24563553

  14. Word Processing in Children With Autism Spectrum Disorders: Evidence From Event-Related Potentials.

    PubMed

    Sandbank, Micheal; Yoder, Paul; Key, Alexandra P

    2017-12-20

    This investigation was conducted to determine whether young children with autism spectrum disorders exhibited a canonical neural response to word stimuli and whether putative event-related potential (ERP) measures of word processing were correlated with a concurrent measure of receptive language. Additional exploratory analyses were used to examine whether the magnitude of the association between ERP measures of word processing and receptive language varied as a function of the number of word stimuli the participants reportedly understood. Auditory ERPs were recorded in response to spoken words and nonwords presented with equal probability in 34 children aged 2-5 years with a diagnosis of autism spectrum disorder who were in the early stages of language acquisition. Average amplitudes and amplitude differences between word and nonword stimuli within 200-500 ms were examined at left temporal (T3) and parietal (P3) electrode clusters. Receptive vocabulary size and the number of experimental stimuli understood were concurrently measured using the MacArthur-Bates Communicative Development Inventories. Across the entire participant group, word-nonword amplitude differences were diminished. The average word-nonword amplitude difference at T3 was related to receptive vocabulary only if 5 or more word stimuli were understood. If ERPs are to ever have clinical utility, their construct validity must be established by investigations that confirm their associations with predictably related constructs. These results contribute to accruing evidence, suggesting that a valid measure of auditory word processing can be derived from the left temporal response to words and nonwords. In addition, this measure can be useful even for participants who do not reportedly understand all of the words presented as experimental stimuli, though it will be important for researchers to track familiarity with word stimuli in future investigations. https://doi.org/10.23641/asha.5614840.

  15. A Closer Look at Phonology as a Predictor of Spoken Sentence Processing and Word Reading

    ERIC Educational Resources Information Center

    Myers, Suzanne; Robertson, Erin K.

    2015-01-01

    The goal of this study was to tease apart the roles of phonological awareness (pA) and phonological short-term memory (pSTM) in sentence comprehension, sentence production, and word reading. Children 6- to 10-years of age (N = 377) completed standardized tests of pA ("Elision") and pSTM ("Nonword Repetition") from the…

  16. Neural correlates of conflict between gestures and words: A domain-specific role for a temporal-parietal complex.

    PubMed

    Noah, J Adam; Dravida, Swethasri; Zhang, Xian; Yahil, Shaul; Hirsch, Joy

    2017-01-01

    The interpretation of social cues is a fundamental function of human social behavior, and resolution of inconsistencies between spoken and gestural cues plays an important role in successful interactions. To gain insight into these underlying neural processes, we compared neural responses in a traditional color/word conflict task and to a gesture/word conflict task to test hypotheses of domain-general and domain-specific conflict resolution. In the gesture task, recorded spoken words ("yes" and "no") were presented simultaneously with video recordings of actors performing one of the following affirmative or negative gestures: thumbs up, thumbs down, head nodding (up and down), or head shaking (side-to-side), thereby generating congruent and incongruent communication stimuli between gesture and words. Participants identified the communicative intent of the gestures as either positive or negative. In the color task, participants were presented the words "red" and "green" in either red or green font and were asked to identify the color of the letters. We observed a classic "Stroop" behavioral interference effect, with participants showing increased response time for incongruent trials relative to congruent ones for both the gesture and color tasks. Hemodynamic signals acquired using functional near-infrared spectroscopy (fNIRS) were increased in the right dorsolateral prefrontal cortex (DLPFC) for incongruent trials relative to congruent trials for both tasks consistent with a common, domain-general mechanism for detecting conflict. However, activity in the left DLPFC and frontal eye fields and the right temporal-parietal junction (TPJ), superior temporal gyrus (STG), supramarginal gyrus (SMG), and primary and auditory association cortices was greater for the gesture task than the color task. Thus, in addition to domain-general conflict processing mechanisms, as suggested by common engagement of right DLPFC, socially specialized neural modules localized to the left

  17. SPOKEN COCHABAMBA QUECHUA, UNITS 13-24.

    ERIC Educational Resources Information Center

    LASTRA, YOLANDA; SOLA, DONALD F.

    UNITS 13-24 OF THE SPOKEN COCHABAMBA QUECHUA COURSE FOLLOW THE GENERAL FORMAT OF THE FIRST VOLUME (UNITS 1-12). THIS SECOND VOLUME IS INTENDED FOR USE IN AN INTERMEDIATE OR ADVANCED COURSE AND INCLUDES MORE COMPLEX DIALOGS, CONVERSATIONS, "LISTENING-INS," AND DICTATIONS, AS WELL AS GRAMMAR AND EXERCISE SECTIONS COVERING ADDITIONAL…

  18. SPOKEN CUZCO QUECHUA, UNITS 7-12.

    ERIC Educational Resources Information Center

    SOLA, DONALD F.; AND OTHERS

    THIS SECOND VOLUME OF AN INTRODUCTORY COURSE IN SPOKEN CUZCO QUECHUA ALSO COMPRISES ENOUGH MATERIAL FOR ONE INTENSIVE SUMMER SESSION COURSE OR ONE SEMESTER OF SEMI-INTENSIVE INSTRUCTION (120 CLASS HOURS). THE METHOD OF PRESENTATION IS ESSENTIALLY THE SAME AS IN THE FIRST VOLUME WITH FURTHER CONTRASTIVE, LINGUISTIC ANALYSIS OF ENGLISH-QUECHUA…

  19. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  20. The Penefit of Salience: Salient Accented, but Not Unaccented Words Reveal Accent Adaptation Effects

    PubMed Central

    Grohe, Ann-Kathrin; Weber, Andrea

    2016-01-01

    In two eye-tracking experiments, the effects of salience in accent training and speech accentedness on spoken-word recognition were investigated. Salience was expected to increase a stimulus' prominence and therefore promote learning. A training-test paradigm was used on native German participants utilizing an artificial German accent. Salience was elicited by two different criteria: production and listening training as a subjective criterion and accented (Experiment 1) and canonical test words (Experiment 2) as an objective criterion. During training in Experiment 1, participants either read single German words out loud and deliberately devoiced initial voiced stop consonants (e.g., Balken—“beam” pronounced as *Palken), or they listened to pre-recorded words with the same accent. In a subsequent eye-tracking experiment, looks to auditorily presented target words with the accent were analyzed. Participants from both training conditions fixated accented target words more often than a control group without training. Training was identical in Experiment 2, but during test, canonical German words that overlapped in onset with the accented words from training were presented as target words (e.g., Palme—“palm tree” overlapped in onset with the training word *Palken) rather than accented words. This time, no training effect was observed; recognition of canonical word forms was not affected by having learned the accent. Therefore, accent learning was only visible when the accented test tokens in Experiment 1, which were not included in the test of Experiment 2, possessed sufficient salience based on the objective criterion “accent.” These effects were not modified by the subjective criterion of salience from the training modality. PMID:27375540

  1. When do combinatorial mechanisms apply in the production of inflected words?

    PubMed

    Cholin, Joana; Rapp, Brenda; Miozzo, Michele

    2010-01-01

    A central question for theories of inflected word processing is to determine under what circumstances compositional procedures apply. Some accounts (e.g., the dual-mechanism model; Clahsen, 1999) propose that compositional processes only apply to verbs that take productive affixes. For all other verbs, inflected forms are assumed to be stored in the lexicon in a nondecomposed manner. This account makes clear predictions about the consequences of disruption to the lexical access mechanisms involved in the spoken production of inflected forms. Briefly, it predicts that nonproductive forms (which require lexical access) should be more affected than productive forms (which, depending on the language task, may not). We tested these predictions through the detailed analysis of the spoken production of a German-speaking individual with an acquired lexical impairment resulting from a stroke. Analyses of response accuracy, error types, and frequency effects revealed that combinatorial processes are not restricted to verbs that take productive inflections. On this basis, we propose an alternative account, the stem-based assembly model (SAM), which posits that combinatorial processes may be available to all stems and not only to those that combine with productive affixes.

  2. When do combinatorial mechanisms apply in the production of inflected words?

    PubMed Central

    Cholin, Joana; Rapp, Brenda; Miozzo, Michele

    2010-01-01

    A central question for theories of inflected word processing is to determine under what circumstances compositional procedures apply. Some accounts (e.g., the Dual Mechanism Model; Clahsen, 1999) propose that compositional processes only apply to verbs that take productive affixes. For all other verbs, inflected forms are assumed to be stored in the lexicon in a non-decomposed manner. This account makes clear predictions about the consequences of disruption to the lexical access mechanisms involved in the spoken production of inflected forms. Briefly, it predicts that non-productive forms (which require lexical access) should be more affected than productive forms (which, depending on the language task, may not). We tested these predictions through the detailed analysis of the spoken production of a German-speaking individual with an acquired lexical impairment resulting from a stroke. Analyses of response accuracy, error types, and frequency effects revealed that combinatorial processes are not restricted to verbs that take productive inflections. On this basis, we propose an alternative account, the Stem-based Assembly Model (SAM) that posits that combinatorial processes may be available to all stems, and not only those that combine with productive affixes. PMID:21104479

  3. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    PubMed Central

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when considering all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980
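The type-vs-token contrast at the heart of this record can be illustrated with a small observed/expected calculation. The corpus and CV syllables below are hypothetical placeholders, not the study's data; the ratio simply compares each CV's token frequency against what the marginal consonant and vowel frequencies alone would predict.

```python
# Hypothetical CV tokens standing in for a spoken corpus (not the study's data).
from collections import Counter

tokens = ["ba", "ba", "da", "di", "di", "di", "ga", "gu", "gu", "ba"]

cv_counts = Counter(tokens)               # token count per CV syllable
c_counts = Counter(t[0] for t in tokens)  # marginal consonant counts
v_counts = Counter(t[1] for t in tokens)  # marginal vowel counts
n = len(tokens)

# An observed/expected ratio > 1 means the CV pair co-occurs more often
# than the individual C and V frequencies alone would predict.
for cv, observed in sorted(cv_counts.items()):
    expected = c_counts[cv[0]] * v_counts[cv[1]] / n
    print(cv, round(observed / expected, 2))
```

Running the same tally over dictionary entries (types) rather than running speech (tokens) is what produces the divergence the authors report.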

  4. Spoken Ayacucho Quechua, Units 11-20.

    ERIC Educational Resources Information Center

    Parker, Gary J.; Sola, Donald F.

    The essentials of Ayacucho grammar were presented in the first volume of this series, Spoken Ayacucho Quechua, Units 1-10. The 10 units in this volume (11-20) are intended for use in an intermediate or advanced course, and present the student with lengthier and more complex dialogs, conversations, "listening-ins," and dictations as well…

  5. Spoken Sentence Production in College Students with Dyslexia: Working Memory and Vocabulary Effects

    ERIC Educational Resources Information Center

    Wiseheart, Rebecca; Altmann, Lori J. P.

    2018-01-01

    Background: Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. Aims: To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group…

  6. The impact of intonation and valence on objective and subjective attention capture by auditory alarms.

    PubMed

    Ljungberg, Jessica K; Parmentier, Fabrice

    2012-10-01

    The objective was to study the involuntary capture of attention by spoken words varying in intonation and valence. In studies of verbal alarms, the propensity of alarms to capture attention has been primarily assessed with the use of subjective ratings of their perceived urgency. Past studies suggest that such ratings vary with the alarms' spoken urgency and content. We measured attention capture by spoken words varying in valence (negative vs. neutral) and intonation (urgently vs. nonurgently spoken) through subjective ratings and behavioral measures. The key behavioral measure was the response latency to visual stimuli in the presence of spoken words breaking away from the periodical repetition of a tone. The results showed that all words captured attention relative to a baseline standard tone but that this effect was partly counteracted by a relative speeding of responses for urgently compared with nonurgently spoken words. Word valence did not affect behavioral performance. Rating data showed that both intonation and valence significantly increased perceived urgency and attention-grabbing ratings, with no interaction. The data suggest a congruency between subjective ratings and behavioral performance with respect to spoken intonation but not valence. This study demonstrates the usefulness and feasibility of objective measures of attention capture to help design efficient alarm systems.

  7. Lacquered Words: The Evolution of Vietnamese under Sinitic Influences from the 1st Century B.C.E. through the 17th Century C.E.

    ERIC Educational Resources Information Center

    Phan, John Duong

    2013-01-01

    As much as three quarters of the modern Vietnamese lexicon is of Chinese origin. The majority of these words are often assumed to have originated in much the same manner as late Sino-Korean and Sino-Japanese borrowed forms: by rote memorization of reading glosses that were acquired through limited exposure to spoken Sinitic. However, under closer…

  8. A Grammar of Spoken Brazilian Portuguese.

    ERIC Educational Resources Information Center

    Thomas, Earl W.

    This is a first-year text of Portuguese grammar based on the Portuguese of moderately educated Brazilians from the area around Rio de Janeiro. Spoken idiomatic usage is emphasized. An important innovation is found in the presentation of verb tenses; they are presented in the order in which the native speaker learns them. The text is intended to…

  9. Trajectories of cognitive development during adolescence among youth at-risk for schizophrenia.

    PubMed

    Dickson, Hannah; Cullen, Alexis E; Jones, Rebecca; Reichenberg, Abraham; Roberts, Ruth E; Hodgins, Sheilagh; Morris, Robin G; Laurens, Kristin R

    2018-04-23

    Among adults with schizophrenia, evidence suggests that premorbid deficits in different cognitive domains follow distinct developmental courses during childhood and adolescence. The aim of this study was to delineate trajectories of adolescent cognitive functions prospectively among different groups of youth at-risk for schizophrenia, relative to their typically developing (TD) peers. Using linear mixed models adjusted for sex, ethnicity, parental occupation and practice effects, cognitive development between ages 9 and 16 years was compared for youth characterised by a triad of well-replicated developmental antecedents of schizophrenia (ASz; N = 32) and youth with at least one affected relative with schizophrenia or schizoaffective disorder (FHx; N = 29), relative to TD youth (N = 45). Participants completed measures of IQ, scholastic achievement, memory and executive function at three time-points, separated by approximately 24-month intervals. Compared to TD youth, both ASz and FHx youth displayed stable developmental deficits in verbal working memory and inhibition/switching executive functions. ASz youth additionally presented with stable deficits in measures of vocabulary (IQ), word reading, numerical operations, and category fluency executive function, and a slower rate of growth (developmental lag) on spelling from 9 to 16 years than TD peers. Conversely, faster rates of growth relative to TD peers (developmental delay) were observed on visual and verbal memory, and on category fluency executive function (ASz youth only) and on matrix reasoning (IQ) and word reading (FHx youth only). These differential patterns of deviation from normative adolescent cognitive development among at-risk youth imply potential for cognitive rehabilitation targeting of specific cognitive deficits at different developmental phases.

  10. Brain regions underlying word finding difficulties in temporal lobe epilepsy.

    PubMed

    Trebuchon-Da Fonseca, Agnes; Guedj, Eric; Alario, F-Xavier; Laguitton, Virginie; Mundler, Olivier; Chauvel, Patrick; Liegeois-Chauvel, Catherine

    2009-10-01

    Word finding difficulties are often reported by epileptic patients with seizures originating from the language dominant cerebral hemisphere, for example, in temporal lobe epilepsy. Evidence regarding the brain regions underlying this deficit comes from studies of peri-operative electro-cortical stimulation, as well as post-surgical performance. This evidence has highlighted a role for the anterior part of the dominant temporal lobe in oral word production. These conclusions contrast with findings from activation studies involving healthy speakers or acute ischaemic stroke patients, where the region most directly related to word retrieval appears to be the posterior part of the left temporal lobe. To clarify the neural basis of word retrieval in temporal lobe epilepsy, we tested forty-three drug-resistant temporal lobe epilepsy patients (28 left, 15 right). Comprehensive neuropsychological and language assessments were performed. Single spoken word production was elicited with picture or definition stimuli. Detailed analysis allowed the distinction of impaired word retrieval from other possible causes of naming failure. Finally, the neural substrate of the deficit was assessed by correlating word retrieval performance with resting-state brain metabolism measured by 18F-fluoro-2-deoxy-D-glucose positron emission tomography (FDG-PET). Naming difficulties often resulted from genuine word retrieval failures (anomic states), both in picture and in definition tasks. Left temporal lobe epilepsy patients showed considerably worse performance than right temporal lobe epilepsy patients. Performance was poorer in the definition than in the picture task. Across patients and the left temporal lobe epilepsy subgroup, frequency of anomic state was negatively correlated with resting-state brain metabolism in left posterior and basal temporal regions (Brodmann's area 20-37-39). These results show the involvement of posterior temporal regions, within a larger antero-posterior-basal temporal network, in…

  11. Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension

    PubMed Central

    Sekerina, Irina A.; Campanelli, Luca; Van Dyke, Julie A.

    2016-01-01

    The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility for studying retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors, which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects, when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals. PMID:27378974

  12. A Picture’s Worth a Thousand Words: Engaging Youth in CBPR Using the Creative Arts

    PubMed Central

    Yonas, Michael A.; Burke, Jessica G.; Rak, Kimberly; Bennett, Antoine; Kelly, Vera; Gielen, Andrea C.

    2010-01-01

    Background Engaging youth and incorporating their unique expertise into the research process is important when addressing issues related to their health. Visual Voices is an arts-based participatory data collection method designed to work together with young people and communities to collaboratively elicit, examine, and celebrate the perspectives of youth. Objectives To present a process for using the creative arts with young people as a participatory data collection method and to give examples of their perspectives on safety and violence. Methods Using the creative arts, this study examined and illustrated youths' perspectives on how community factors influence safety and violence. Visual Voices was conducted with a total of 22 African-American youth in two urban neighborhoods. This method included creative arts-based writing, drawing, and painting activities designed to yield culturally relevant data generated and explored by youth. Qualitative data were captured through the creative content of writings, drawings, and paintings created by the youths as well as transcripts from audio-recorded group discussion. Data were analyzed for thematic content and triangulated across traditional and nontraditional mediums. Findings were interpreted with participants and shared publicly for further reflection and utilization. Conclusion The youth participants identified a range of issues related to community factors, community safety, and violence. Such topics included the role of schools and social networks within the community as safe places and corner stores and abandoned houses as unsafe places. Visual Voices is a creative research method that provides a unique opportunity for youth to generate a range of ideas through access to the multiple creative methods provided. It is an innovative process that generates rich and valuable data about topics of interest and the lived experiences of young community members. PMID:20097996

  13. Do individuals with autism process words in context? Evidence from language-mediated eye-movements.

    PubMed

    Brock, Jon; Norbury, Courtenay; Einav, Shiri; Nation, Kate

    2008-09-01

    It is widely argued that people with autism have difficulty processing ambiguous linguistic information in context. To investigate this claim, we recorded the eye-movements of 24 adolescents with autism spectrum disorder and 24 language-matched peers as they monitored spoken sentences for words corresponding to objects on a computer display. Following a target word, participants looked more at a competitor object sharing the same onset than at phonologically unrelated objects. This effect was, however, mediated by the sentence context such that participants looked less at the phonological competitor if it was semantically incongruous with the preceding verb. Contrary to predictions, the two groups evidenced similar effects of context on eye-movements. Instead, across both groups, the effect of sentence context was reduced in individuals with relatively poor language skills. Implications for the weak central coherence account of autism are discussed.

  14. Spoken Ayacucho Quechua. Units 1-10.

    ERIC Educational Resources Information Center

    Parker, Gary J.; Sola, Donald F.

    This beginning course in Ayacucho Quechua, spoken by about a million people in south-central Peru, was prepared to introduce the phonology and grammar of this dialect to speakers of English. The first of two volumes, it serves as a text for a 6-week intensive course of 20 class hours a week. The authors compare and contrast significant features of…

  15. Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task.

    PubMed

    Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald

    2017-12-15

    The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30°, 0°, and 30° azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some…

  16. Exploring detection of contact vs. fantasy online sexual offenders in chats with minors: Statistical discourse analysis of self-disclosure and emotion words.

    PubMed

    Chiu, Ming Ming; Seigfried-Spellar, Kathryn C; Ringenberg, Tatiana R

    2018-07-01

    This exploratory study is the first to identify content differences between youths' online chats with contact child sex offenders (CCSOs; seek to meet with youths) and those with fantasy child sex offenders (FCSOs; do not meet with youths) using statistical discourse analysis (SDA). Past studies suggest that CCSOs share their experiences and emotions with targeted youths (self-disclosure grooming tactic) and encourage them to reciprocate, to build trust and closer relationships through a cycle of self-disclosures. In this study, we examined 36,029 words in 4,353 messages within 107 anonymized online chat sessions by 21 people, specifically 12 youths and 9 arrested sex offenders (5 CCSOs and 4 FCSOs), using SDA. Results showed that CCSOs were more likely than FCSOs to write online messages with specific words (first person pronouns, negative emotions and positive emotions), suggesting the use of self-disclosure grooming tactics. CCSOs' self-disclosure messages elicited corresponding self-disclosure messages from their targeted youths. These results suggest that CCSOs use grooming tactics that help engender youths' trust to meet in the physical world, but FCSOs do not. Copyright © 2018 Elsevier Ltd. All rights reserved.
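The word-category tallies that feed this kind of analysis reduce to simple lexicon lookups. The sketch below is illustrative only: the word lists are invented placeholders, not the study's coding scheme or emotion lexicon.

```python
# Toy category word lists (invented placeholders, not the study's lexicon).
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
POSITIVE = {"happy", "love", "great"}
NEGATIVE = {"sad", "lonely", "scared"}

def category_counts(message):
    """Tally first-person pronouns and emotion words in one chat message."""
    words = message.lower().split()
    return {
        "first_person": sum(w in FIRST_PERSON for w in words),
        "positive": sum(w in POSITIVE for w in words),
        "negative": sum(w in NEGATIVE for w in words),
    }

print(category_counts("I feel lonely and my day was sad"))
# prints {'first_person': 2, 'positive': 0, 'negative': 2}
```

Per-message counts like these, aggregated across a chat session, are the kind of predictors a statistical discourse analysis can then model over the course of a conversation.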

  17. The influence of lexical characteristics and talker accent on the recognition of English words by speakers of Japanese.

    PubMed

    Yoneyama, Kiyoko; Munson, Benjamin

    2017-02-01

    This study examined whether the influence of listeners' language proficiency on L2 speech recognition is affected by the structure of the lexicon. This specific experiment examined the effect of word frequency (WF) and phonological neighborhood density (PND) on word recognition in native speakers of English and second-language (L2) speakers of English whose first language was Japanese. The stimuli included English words produced by a native speaker of English and English words produced by a native speaker of Japanese (i.e., with Japanese-accented English). The experiment was inspired by the finding of Imai, Flege, and Walley [(2005). J. Acoust. Soc. Am. 117, 896-907] that the influence of talker accent on speech intelligibility for L2 learners of English whose L1 is Spanish varies as a function of words' PND. In the current study, significant interactions between stimulus accentedness and listener group on the accuracy and speed of spoken word recognition were found, as were significant effects of PND and WF on word-recognition accuracy. However, no significant three-way interaction among stimulus talker, listener group, and PND on either measure was found. Results are discussed in light of recent findings on cross-linguistic differences in the nature of the effects of PND on L2 phonological and lexical processing.
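Phonological neighborhood density (PND), one of the two lexical variables manipulated in this record, is conventionally defined as the number of lexicon entries differing from a target word by a single phoneme substitution, insertion, or deletion. The sketch below computes that count over a toy orthographic lexicon; the study itself worked with phonological forms, so the words here are stand-ins.

```python
# Toy lexicon (orthographic stand-in for phonological transcriptions).
def is_neighbor(a, b):
    """True if b differs from a by one substitution, insertion, or deletion."""
    if a == b:
        return False
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):  # exactly one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    # insertion/deletion: removing one character of the longer word
    # must yield the shorter word
    shorter, longer = sorted((a, b), key=len)
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

lexicon = ["cat", "bat", "cot", "cast", "at", "dog"]
density = {w: sum(is_neighbor(w, other) for other in lexicon) for w in lexicon}
print(density["cat"])  # bat, cot, cast, and at are neighbors -> prints 4
```

High-density words like "cat" sit in crowded neighborhoods and are typically recognized more slowly under noise or accent, which is what makes PND a useful manipulation in word-recognition experiments.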

  18. Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode

    PubMed Central

    Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina

    2013-01-01

    Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects. PMID:23554976

  19. Visual word recognition in deaf readers: lexicality is modulated by communication mode.

    PubMed

    Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina

    2013-01-01

    Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects.

  20. Neighborhood Density and Syntactic Class Effects on Spoken Word Recognition: Specific Language Impairment and Typical Development

    ERIC Educational Resources Information Center

    Hoover, Jill R.

    2018-01-01

    Purpose: The purpose of the current study was to determine the effect of neighborhood density and syntactic class on word recognition in children with specific language impairment (SLI) and typical development (TD). Method: Fifteen children with SLI ("M" age = 6;5 [years;months]) and 15 with TD ("M" age = 6;4) completed a…