Selective attention in perceptual adjustments to voice.
Mullennix, J W; Howe, J N
1999-10-01
The effects of perceptual adjustments to voice information on the perception of isolated spoken words were examined. In two experiments, spoken target words were preceded or followed within a trial by a neutral word spoken either in the same voice as the target or in a different voice. Overall, words were reproduced more accurately on trials on which the voice of the neutral word matched the voice of the spoken target word, suggesting that perceptual adjustments to a change in voice interfere with word processing. This result, however, was mediated by selective attention to voice. The results provide further evidence of a close processing relationship between perceptual adjustments to voice and spoken word recognition.
Mani, Nivedita; Huettig, Falk
2014-10-01
Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing, potentially attributable to participants' literacy skills. Against this background, the current study examined the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, as in previous studies, successfully anticipated upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading, meta-phonological awareness, or spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words.
Famous talker effects in spoken word recognition.
Maibauer, Alisa M; Markis, Teresa A; Newell, Jessica; McLennan, Conor T
2014-01-01
Previous work has demonstrated that talker-specific representations affect spoken word recognition relatively late during processing. However, participants in these studies were listening to unfamiliar talkers. In the present research, we used a long-term repetition-priming paradigm and a speeded-shadowing task and presented listeners with famous talkers. In Experiment 1, half the words were spoken by Barack Obama, and half by Hillary Clinton. Reaction times (RTs) to repeated words were shorter than those to unprimed words only when repeated by the same talker. However, in Experiment 2, using nonfamous talkers, RTs to repeated words were shorter than those to unprimed words both when repeated by the same talker and when repeated by a different talker. Taken together, the results demonstrate that talker-specific details can affect the perception of spoken words relatively early during processing when words are spoken by famous talkers.
Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hagiwara, Hiroko
2011-04-01
Healthy adults can identify spoken words at a remarkable speed by incrementally analyzing word-onset information. It is currently unknown how this adult-level speed of spoken-word processing emerges during children's native-language acquisition. In a picture-word mismatch paradigm, we manipulated the semantic congruency between picture contexts and spoken words, and recorded event-related potential (ERP) responses to the words. Previous similar studies focused on the N400 response, but we focused instead on the onsets of semantic congruency effects (N200 or Phonological Mismatch Negativity), which contain critical information for incremental spoken-word processing. We analyzed ERPs obtained longitudinally from two age cohorts of 40 primary-school children (total n=80) over a 3-year period. Children first tested at 7 years of age showed earlier onsets of congruency effects (by approximately 70 ms) when tested 2 years later (i.e., at age 9). Children first tested at 9 years of age did not show such shortening of onset latencies 2 years later (i.e., at age 11). Overall, children's onset latencies at age 9 appeared similar to those of adults. These data challenge the previous hypothesis that word processing is well established at age 7. Instead, they support the view that the acceleration of spoken-word processing continues beyond age 7.
The time course of morphological processing during spoken word recognition in Chinese.
Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan
2017-12-01
We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, earlier than the whole-word competitor did. These results suggest that lexical access to the auditory word is incremental and that morphological processing (i.e., semantic access to the first constituent) occurs at an early stage, before access to the representation of the whole word in Chinese.
The Temporal Structure of Spoken Language Understanding.
ERIC Educational Resources Information Center
Marslen-Wilson, William; Tyler, Lorraine Komisarjevsky
1980-01-01
An investigation of the word-by-word time course of spoken language understanding focused on word recognition and structural and interpretative processes. Results supported an online interactive theory of language processing, in which lexical, structural, and interpretative knowledge sources communicate and interact efficiently during processing and…
Presentation format effects in working memory: the role of attention.
Foos, Paul W; Goolkasian, Paula
2005-04-01
Four experiments are reported in which participants attempted to remember three or six concrete nouns, presented as pictures, spoken words, or printed words, while also verifying the accuracy of sentences. Hypotheses meant to explain the higher recall of pictures and spoken words over printed words were tested. Increasing the difficulty and changing the type of processing task from arithmetic to a visual/spatial reasoning task did not influence recall. An examination of long-term modality effects showed that those effects were not sufficient to explain the superior performance with spoken words and pictures. Only when we manipulated the allocation of attention to the items in the storage task, by requiring the participants to articulate the items and by presenting the stimulus items under a degraded condition, were we able to reduce or remove the effect of presentation format. The findings suggest that the better recall of pictures and spoken words over printed words results from the fact that, under normal presentation conditions, printed words receive less processing attention than pictures and spoken words do.
Shen, Wei; Qu, Qingqing; Li, Xingshan
2016-07-01
In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate the deployment of visual attention to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, but also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.
Orthographic effects in spoken word recognition: Evidence from Chinese.
Qu, Qingqing; Damian, Markus F
2017-06-01
Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.
Inspector, Michael; Manor, David; Amir, Noam; Kushnir, Tamar; Karni, Avi
2013-01-01
Intonation may serve as a cue for facilitated recognition and processing of spoken words, and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, each using a different set of words, in which specific task-irrelevant intonation changes were applied: (i) all words were presented in a set, flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant across its three repetitions; (iii) each word had a different arbitrary pitch contour on each of its repetitions. The repeated presentation of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21, 22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.
Chen, Yi-Chuan; Spence, Charles
2013-01-01
The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.
Rapid modulation of spoken word recognition by visual primes.
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J
2016-02-01
In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200 ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.
Electrophysiological Responses to Coarticulatory and Word Level Miscues
ERIC Educational Resources Information Center
Archibald, Lisa M. D.; Joanisse, Marc F.
2011-01-01
The influence of coarticulation cues on spoken word recognition is not yet well understood. This acoustic/phonetic variation may be processed early and recognized as sensory noise to be stripped away, or it may influence processing at a later prelexical stage. The present study used event-related potentials (ERPs) in a picture/spoken word matching…
The Slow Developmental Time Course of Real-Time Spoken Word Recognition
ERIC Educational Resources Information Center
Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob
2015-01-01
This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…
Ostarek, Markus; Huettig, Falk
2017-03-01
The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" → picture of a bottle) versus incongruent ("bottle" → picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, that is, what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before.
Individual differences in online spoken word recognition: Implications for SLI
McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce
2012-01-01
Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort, and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels, vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment.
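The best-fitting account here was variation in lexical decay. As a rough illustration of what that parameter does, the sketch below (plain Python, not the authors' TRACE variants; the words, unit dynamics, and constants are all illustrative) runs a toy interactive-activation lexical layer in which each word unit is driven toward a ceiling by bottom-up support and pulled back toward its resting level by a decay term; raising `decay` makes activation die away faster once the input stops.

```python
def run_lexical_layer(inputs, decay=0.05, rest=-0.1, max_act=1.0, cycles=60):
    """inputs: dict mapping word -> bottom-up support (0..1), applied only
    for the first half of the cycles (as if acoustic evidence then stops)."""
    act = {w: rest for w in inputs}
    history = []
    for t in range(cycles):
        for w, net in inputs.items():
            a = act[w]
            support = net if t < cycles // 2 else 0.0   # input switched off halfway
            excite = support * (max_act - a)            # drive the unit toward ceiling
            act[w] = a + excite - decay * (a - rest)    # decay pulls it back to rest
        history.append(dict(act))
    return history

# Higher decay -> activation from earlier input dies away faster, which
# changes how long a competitor like "candy" stays active after its support ends.
slow = run_lexical_layer({"candle": 0.3, "candy": 0.15}, decay=0.02)
fast = run_lexical_layer({"candle": 0.3, "candy": 0.15}, decay=0.20)
print(round(slow[-1]["candy"], 3), round(fast[-1]["candy"], 3))
```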
Strand, Julia F; Sommers, Mitchell S
2011-09-01
Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition.
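The abstract names the phi-square statistic only at a high level. One plausible formalization, offered as an assumption rather than the authors' exact computation, treats each stimulus's row of response counts in a confusion matrix as a distribution and computes phi-square (chi-square divided by N) over the 2 x k contingency table for a pair of stimuli; identical response distributions then yield 0, i.e., maximal perceptual confusability.

```python
import numpy as np

def phi_square(row_a, row_b):
    """Phi-square dissimilarity between two stimuli, given their rows of
    response counts from a confusion matrix (length-k vectors).
    phi^2 = chi^2 / N over the 2 x k table; 0 means the two stimuli drew
    identical response distributions (highly confusable)."""
    table = np.array([row_a, row_b], dtype=float)
    n = table.sum()
    row_tot = table.sum(axis=1, keepdims=True)
    col_tot = table.sum(axis=0, keepdims=True)
    expected = row_tot @ col_tot / n
    mask = expected > 0                      # skip empty response columns
    chi2 = (((table - expected) ** 2 / np.where(mask, expected, 1)) * mask).sum()
    return chi2 / n

# Hypothetical confusion-matrix rows: response counts for two similar stimuli.
print(round(phi_square([40, 8, 2], [35, 12, 3]), 4))  # small value -> similar
```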
An event-related potential study of memory for words spoken aloud or heard.
Wilding, E L; Rugg, M D
1997-09-01
Subjects made old/new recognition judgements to visually presented words, half of which had been encountered in a prior study phase. For each word judged old, subjects made a subsequent source judgement, indicating whether they had pronounced the word aloud at study (spoken words), or whether they had heard the word spoken to them (heard words). Event-related potentials (ERPs) were compared for three classes of test item; words correctly judged to be new (correct rejections), and spoken and heard words that were correctly assigned to source (spoken hit/hit and heard hit/hit response categories). Consistent with previous findings (Wilding, E. L. and Rugg, M. D., Brain, 1996, 119, 889-905), two temporally and topographically dissociable components, with parietal and frontal maxima respectively, differentiated the ERPs to the hit/hit and correct rejection response categories. In addition, there was some evidence that the frontally distributed component could be decomposed into two distinct components, only one of which differentiated the two classes of hit/hit ERPs. The findings suggest that at least three functionally and neurologically dissociable processes can contribute to successful recovery of source information.
Gwilliams, L; Marantz, A
2015-08-01
Although the significance of morphological structure is established in visual word processing, its role in auditory processing remains unclear. Using magnetoencephalography we probe the significance of the root morpheme for spoken Arabic words with two experimental manipulations. First we compare a model of auditory processing that calculates probable lexical outcomes based on whole-word competitors, versus a model that only considers the root as relevant to lexical identification. Second, we assess violations to the root-specific Obligatory Contour Principle (OCP), which disallows root-initial consonant gemination. Our results show root prediction to significantly correlate with neural activity in superior temporal regions, independent of predictions based on whole-word competitors. Furthermore, words that violated the OCP constraint were significantly easier to dismiss as valid words than probability-matched counterparts. The findings suggest that lexical auditory processing is dependent upon morphological structure, and that the root forms a principal unit through which spoken words are recognised.
The effects of sad prosody on hemispheric specialization for words processing.
Leshem, Rotem; Arzouan, Yossi; Armony-Sivan, Rinat
2015-06-01
This study examined the effect of sad prosody on hemispheric specialization for word processing using behavioral and electrophysiological measures. A dichotic listening task combining focused attention and signal-detection methods was conducted to evaluate the detection of a word spoken in neutral or sad prosody. An overall right ear advantage together with leftward lateralization in early (150-170 ms) and late (240-260 ms) processing stages was found for word detection, regardless of prosody. Furthermore, the early stage was most pronounced for words spoken in neutral prosody, showing greater negative activation over the left than the right hemisphere. In contrast, the later stage was most pronounced for words spoken with sad prosody, showing greater positive activation over the left than the right hemisphere. The findings suggest that sad prosody alone was not sufficient to modulate hemispheric asymmetry in word-level processing. We posit that lateralized effects of sad prosody on word processing are largely dependent on the psychoacoustic features of the stimuli as well as on task demands.
The influence of speech rate and accent on access and use of semantic information.
Sajin, Stanislav M; Connine, Cynthia M
2017-04-01
When the speech input is presented under suboptimal conditions, processing costs generally arise that affect spoken word recognition. The current study indicates that some of the processing demands imposed by listening to difficult speech can be mitigated by feedback from semantic knowledge. A set of lexical decision experiments examined how foreign-accented speech and word duration impact access to semantic knowledge in spoken word recognition. Results indicate that when listeners process accented speech, their reliance on semantic information increases. Speech rate was not observed to influence semantic access, except when unusually slow accented speech was presented. These findings support interactive activation models of spoken word recognition in which attention is modulated based on speech demands.
ERIC Educational Resources Information Center
Malins, Jeffrey G.; Joanisse, Marc F.
2012-01-01
We investigated the influences of phonological similarity on the time course of spoken word processing in Mandarin Chinese. Event related potentials were recorded while adult native speakers of Mandarin ("N" = 19) judged whether auditory words matched or mismatched visually presented pictures. Mismatching words were of the following…
Evans, Julia L; Gillam, Ronald B; Montgomery, James W
2018-05-10
This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 [years;months]), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition at both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The two groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLD.
Differential Processing of Thematic and Categorical Conceptual Relations in Spoken Word Production
ERIC Educational Resources Information Center
de Zubicaray, Greig I.; Hansen, Samuel; McMahon, Katie L.
2013-01-01
Studies of semantic context effects in spoken word production have typically distinguished between categorical (or taxonomic) and associative relations. However, associates tend to confound semantic features or morphological representations, such as whole-part relations and compounds (e.g., BOAT-anchor, BEE-hive). Using a picture-word interference…
Individual Differences in Inhibitory Control Relate to Bilingual Spoken Word Processing
ERIC Educational Resources Information Center
Mercier, Julie; Pivneva, Irina; Titone, Debra
2014-01-01
We investigated whether individual differences in inhibitory control relate to bilingual spoken word recognition. While their eye movements were monitored, native English and native French English-French bilinguals listened to English words (e.g., "field") and looked at pictures corresponding to the target, a within-language competitor…
Sommers, M S; Kirk, K I; Pisoni, D B
1997-04-01
The purpose of the present studies was to assess the validity of using closed-set response formats to measure two cognitive processes essential for recognizing spoken words: perceptual normalization (the ability to accommodate acoustic-phonetic variability) and lexical discrimination (the ability to isolate words in the mental lexicon). In addition, the experiments were designed to examine the effects of response format on evaluation of these two abilities in normal-hearing (NH), noise-masked normal-hearing (NMNH), and cochlear implant (CI) subject populations. The speech recognition performance of NH, NMNH, and CI listeners was measured using both open- and closed-set response formats under a number of experimental conditions. To assess talker normalization abilities, identification scores for words produced by a single talker were compared with recognition performance for items produced by multiple talkers. To examine lexical discrimination, performance for words that are phonetically similar to many other words (hard words) was compared with scores for items with few phonetically similar competitors (easy words). Open-set word identification for all subjects was significantly poorer when stimuli were produced in lists with multiple talkers compared with conditions in which all of the words were spoken by a single talker. Open-set word recognition also was better for lexically easy compared with lexically hard words. Closed-set tests, in contrast, failed to reveal the effects of either talker variability or lexical difficulty even when the response alternatives provided were systematically selected to maximize confusability with target items. These findings suggest that, although closed-set tests may provide important information for clinical assessment of speech perception, they may not adequately evaluate a number of cognitive processes that are necessary for recognizing spoken words. The parallel results obtained across all subject groups indicate that NH, NMNH, and CI listeners engage similar perceptual operations to identify spoken words. Implications of these findings for the design of new test batteries that can provide comprehensive evaluations of the individual capacities needed for processing spoken language are discussed.
Frisch, Stefan A.; Pisoni, David B.
2012-01-01
Objective: Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These models also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception. Design: A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Results: Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. Conclusions: Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition. Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with normal hearing.
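The two simulations of lexical access lend themselves to a compact sketch. The toy code below (illustrative only; the lexicon, confusion probabilities, and function names are invented for the example) contrasts an architecture that commits to phonemes early and then searches the lexicon for the best match with one that defers phoneme decisions and scores every lexical candidate by the joint probability of its phonemes given the input.

```python
from math import prod

LEXICON = ["bat", "pat", "bad", "mat"]          # toy lexicon (hypothetical)
# P(perceive phoneme | stimulus phoneme); in the real simulation these would
# be derived from feature identification scores. Numbers here are invented:
CONF = {"b": {"b": 0.7, "p": 0.2, "m": 0.1},
        "a": {"a": 1.0},
        "t": {"t": 0.8, "d": 0.2}}

def early_decision(word):
    """Model 1: commit to the most probable phoneme at each position, then
    search the lexicon for the entry best matching the committed string.
    (A full simulation would sample perceived phonemes from CONF instead.)"""
    committed = [max(CONF[ph], key=CONF[ph].get) for ph in word]
    def mismatches(cand):
        return sum(c != k for c, k in zip(cand, committed))
    return min((w for w in LEXICON if len(w) == len(word)), key=mismatches)

def decision_at_lexical_access(word):
    """Model 2: defer phoneme decisions; score each lexical entry by the
    joint probability of its phonemes given the input and pick the best."""
    def score(cand):
        return prod(CONF[ph].get(c, 0.0) for ph, c in zip(word, cand))
    return max((w for w in LEXICON if len(w) == len(word)), key=score)

print(early_decision("bat"), decision_at_lexical_access("bat"))
```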
ERIC Educational Resources Information Center
Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony
2013-01-01
Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…
A Spoken Word Count (Children--Ages 5, 6 and 7).
ERIC Educational Resources Information Center
Wepman, Joseph M.; Hass, Wilbur
Relatively little research has been done on the quantitative characteristics of children's word usage. This spoken count was undertaken to investigate those aspects of word usage and frequency which could cast light on lexical processes in grammar and verbal development in children. Three groups of 30 children each (boys and girls) from…
Marsh, John E.; Ljung, Robert; Nöstl, Anatole; Threadgold, Emma; Campbell, Tom A.
2015-01-01
A dynamic interplay is known to exist between auditory processing and human cognition. For example, prior investigations of speech-in-noise have revealed there is more to learning than just listening: Even if all words within a spoken list are correctly heard in noise, later memory for those words is typically impoverished. These investigations supported a view that there is a "gap" between the intelligibility of speech and memory for that speech. Here, the notion was that this gap between speech intelligibility and memorability is a function of the extent to which the spoken message seizes limited immediate memory resources (e.g., Kjellberg et al., 2008). Accordingly, the more difficult the processing of the spoken message, the fewer resources are available for elaboration, storage, and recall of that spoken material. However, it was not previously known how increasing that difficulty affected the memory processing of semantically rich spoken material. This investigation showed that noise impairs higher levels of cognitive analysis. A variant of the Deese-Roediger-McDermott procedure that encourages semantic elaborative processes was deployed. On each trial, participants listened to a 36-item list comprising 12 words blocked by each of 3 different themes. Each of those 12 words (e.g., bed, tired, snore…) was associated with a "critical" lure theme word that was not presented (e.g., sleep). Word lists were presented either without noise or at a signal-to-noise ratio of 5 dB(A). Noise reduced false recall of the critical words, and decreased the semantic clustering of recall. Theoretical and practical implications are discussed.
Action and object word writing in a case of bilingual aphasia.
Kambanaros, Maria; Messinis, Lambros; Anyfantis, Emmanouil
2012-01-01
We report the spoken and written naming of a bilingual speaker with aphasia in two languages that differ in morphological complexity, orthographic transparency, and script: Greek and English. AA presented with difficulties in spoken picture naming together with preserved written picture naming for action words in Greek. In English, AA showed similar performance across both tasks for action and object words, i.e., difficulties retrieving action and object names in both spoken and written naming. Our findings support the hypothesis that the cognitive processes used for spoken and written naming are independent components of the language system and can be selectively impaired after brain injury. In the case of bilingual speakers, such impairments can affect both languages. We conclude that grammatical category is an organizing principle in bilingual dysgraphia.
An ERP Investigation of Regional and Foreign Accent Processing
ERIC Educational Resources Information Center
Goslin, Jeremy; Duffy, Hester; Floccia, Caroline
2012-01-01
This study used event-related potentials (ERPs) to examine whether we employ the same normalisation mechanisms when processing words spoken with a regional accent or foreign accent. Our results showed that the Phonological Mapping Negativity (PMN) following the onset of the final word of sentences spoken with an unfamiliar regional accent was…
ERIC Educational Resources Information Center
Pitt, Mark A.
2009-01-01
One account of how pronunciation variants of spoken words (center → "senner" or "sennah") are recognized is that sublexical processes use information about variation in the same phonological environments to recover the intended segments [Gaskell, G., & Marslen-Wilson, W. D. (1998). Mechanisms of phonological inference in speech perception.…
Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements
Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.
2016-01-01
In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the "visual world" eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., "point at the candle"). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions.
Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy
2012-06-01
Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss.
Relationships between Lexical Processing Speed, Language Skills, and Autistic Traits in Children
ERIC Educational Resources Information Center
Abrigo, Erin
2012-01-01
According to current models of spoken word recognition listeners understand speech as it unfolds over time. Eye tracking provides a non-invasive, on-line method to monitor attention, providing insight into the processing of spoken language. In the current project a spoken lexical processing assessment (LPA) confirmed current theories of spoken…
Feature Statistics Modulate the Activation of Meaning during Spoken Word Processing
ERIC Educational Resources Information Center
Devereux, Barry J.; Taylor, Kirsten I.; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K.
2016-01-01
Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in ("distinctiveness/sharedness") and likelihood of co-occurrence ("correlational…
Locus of word frequency effects in spelling to dictation: Still at the orthographic level!
Bonin, Patrick; Laroche, Betty; Perret, Cyril
2016-11-01
The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied in objective word frequency and in phonological neighborhood density were orally presented to adults who had to write them down. Following additive-factors logic (Sternberg, 1969, 2001), if word frequency in spelling to dictation influences a processing level (the orthographic output level) different from that influenced by phonological neighborhood density (spoken word recognition), the impact of the two factors should be additive. In contrast, their influence should be overadditive if they act at the same processing level, namely spoken word recognition. We found that both factors had a reliable influence on spelling latencies but did not interact. This finding is in line with an orthographic output locus of word frequency effects in spelling to dictation.
McQueen, James M; Huettig, Falk
2014-01-01
Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or the number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures, and even though strategic naming would have interfered with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.
L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.
Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour
2016-10-01
The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full-attention condition. Attention manipulation reduced priming magnitude in both experiments in L2. Moreover, L2 word retrieval increased reaction times and reduced accuracy on the simultaneous secondary task to protect its own accuracy and speed.
Hemispheric Differences in Indexical Specificity Effects in Spoken Word Recognition
ERIC Educational Resources Information Center
Gonzalez, Julio; McLennan, Conor T.
2007-01-01
Variability in talker identity, one type of indexical variation, has demonstrable effects on the speed and accuracy of spoken word recognition. Furthermore, neuropsychological evidence suggests that indexical and linguistic information may be represented and processed differently in the 2 cerebral hemispheres, and is consistent with findings from…
Reading Spoken Words: Orthographic Effects in Auditory Priming
ERIC Educational Resources Information Center
Chereau, Celine; Gaskell, M. Gareth; Dumay, Nicolas
2007-01-01
Three experiments examined the involvement of orthography in spoken word processing using a task--unimodal auditory priming with offset overlap--taken to reflect activation of prelexical representations. Two types of prime-target relationship were compared; both involved phonological overlap, but only one had a strong orthographic overlap (e.g.,…
Recognizing Spoken Words: The Neighborhood Activation Model
Luce, Paul A.; Pisoni, David B.
2012-01-01
Objective: A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition. Design: Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: (1) the number of words occurring in a neighborhood, (2) the degree of phonetic similarity among the words, and (3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming. Results: The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing impaired populations of children and adults.
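The neighborhood probability rule can be stated compactly: the frequency-weighted support for the stimulus word, divided by that support plus the summed frequency-weighted support of its similarity neighbors. A minimal sketch follows, with invented numbers for illustration (the function name and toy values are not from the original).

```python
def neighborhood_probability(stim_p, stim_freq, neighbors):
    """Frequency-weighted neighborhood probability rule (after Luce's choice
    rule): stimulus support divided by stimulus-plus-neighbor support.
    stim_p: stimulus word intelligibility (confusion-based probability)
    stim_freq: stimulus word frequency
    neighbors: list of (confusion_probability, frequency) pairs"""
    target = stim_p * stim_freq
    competition = sum(p * f for p, f in neighbors)
    return target / (target + competition)

# Hypothetical example: the same word in a sparse neighborhood versus a
# dense neighborhood of high-frequency competitors.
sparse = neighborhood_probability(0.6, 100, [(0.05, 10)])
dense = neighborhood_probability(0.6, 100, [(0.05, 500), (0.10, 300), (0.08, 200)])
print(round(sparse, 3), round(dense, 3))  # recognition predicted harder when dense
```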
On-Line Orthographic Influences on Spoken Language in a Semantic Task
ERIC Educational Resources Information Center
Pattamadilok, Chotiga; Perre, Laetitia; Dufau, Stephane; Ziegler, Johannes C.
2009-01-01
Literacy changes the way the brain processes spoken language. Most psycholinguists believe that orthographic effects on spoken language are either strategic or restricted to meta-phonological tasks. We used event-related brain potentials (ERPs) to investigate the locus and the time course of orthographic effects on spoken word recognition in a…
Infant perceptual development for faces and spoken words: An integrated approach
Watson, Tamara L; Robbins, Rachel A; Best, Catherine T
2014-01-01
There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception.
Path-Length and the Misperception of Speech: Insights from Network Science and Psycholinguistics
NASA Astrophysics Data System (ADS)
Vitevitch, Michael S.; Goldstein, Rutherford; Johnson, Elizabeth
Using the analytical methods of network science, we examined what is retrieved from the lexicon when a spoken word is misperceived. To simulate misperceptions in the laboratory, we used a variant of the semantic associates task—the phonological associate task—in which participants heard an English word and responded with the first word that came to mind that sounded like the word they heard. Most responses were 1 link away from the stimulus word in the lexical network. Distant neighbors (words >1 link away) were provided more often as responses when the stimulus word had low rather than high degree. Finally, even very distant neighbors tended to be connected to the stimulus word by a path in the lexical network. These findings have implications for the processing of spoken words, and highlight the valuable insights that can be obtained by combining the analytic tools of network science with the experimental tasks of psycholinguistics.
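A sketch of the kind of lexical network analysis described here, assuming the standard one-phoneme neighbor definition (a single substitution, addition, or deletion) and using the networkx library; the toy lexicon and transcriptions are invented for illustration.

```python
import itertools
import networkx as nx

def one_phoneme_apart(a, b):
    """True if phoneme strings a and b differ by exactly one substitution,
    addition, or deletion (the standard phonological-neighbor definition)."""
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:
        longer, shorter = (a, b) if len(a) > len(b) else (b, a)
        return any(longer[:i] + longer[i + 1:] == shorter
                   for i in range(len(longer)))
    return False

# Toy lexicon of phonemically transcribed words (one character per phoneme).
words = ["kat", "bat", "kab", "kt", "hat", "hot", "hit"]
G = nx.Graph()
G.add_nodes_from(words)
G.add_edges_from((a, b) for a, b in itertools.combinations(words, 2)
                 if one_phoneme_apart(a, b))

print(G.degree("kat"))                           # degree of a stimulus word
print(nx.shortest_path_length(G, "bat", "hit"))  # path length between two words
```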
Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.
2011-01-01
The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were either congruent or incongruent with the global discourse (e.g., "Luckily Ben had picked up some salt and pepper/basil", preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent)). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than in the incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition.
Examining the Time Course of Indexical Specificity Effects in Spoken Word Recognition
ERIC Educational Resources Information Center
McLennan, Conor T.; Luce, Paul A.
2005-01-01
Variability in talker identity and speaking rate, commonly referred to as indexical variation, has demonstrable effects on the speed and accuracy of spoken word recognition. The present study examines the time course of indexical specificity effects to evaluate the hypothesis that such effects occur relatively late in the perceptual processing of…
The Impact of Orthographic Consistency on German Spoken Word Identification
ERIC Educational Resources Information Center
Beyermann, Sandra; Penke, Martina
2014-01-01
An auditory lexical decision experiment was conducted to find out whether sound-to-spelling consistency has an impact on German spoken word processing, and whether such an impact is different at different stages of reading development. Four groups of readers (school children in the second, third and fifth grades, and university students)…
Implicit Processing of Phonotactic Cues: Evidence from Electrophysiological and Vascular Responses
ERIC Educational Resources Information Center
Rossi, Sonja; Jurgenson, Ina B.; Hanulikova, Adriana; Telkemeyer, Silke; Wartenburger, Isabell; Obrig, Hellmuth
2011-01-01
Spoken word recognition is achieved via competition between activated lexical candidates that match the incoming speech input. The competition is modulated by prelexical cues that are important for segmenting the auditory speech stream into linguistic units. One such prelexical cue that listeners rely on in spoken word recognition is phonotactics.…
Hearing taboo words can result in early talker effects in word recognition for female listeners.
Tuft, Samantha E; McLennan, Conor T; Krestar, Maura L
2018-02-01
Previous spoken word recognition research using the long-term repetition-priming paradigm found performance costs for stimuli mismatching in talker identity. That is, when words were repeated across the two blocks and the identity of the talker changed, reaction times (RTs) were slower than when the repeated words were spoken by the same talker. Such performance costs, or talker effects, followed a time course, occurring only when processing was relatively slow. More recent research suggests that increased explicit and implicit attention towards the talkers can result in talker effects even during relatively fast processing. The purpose of the current study was to examine whether word meaning would influence the pattern of talker effects in an easy lexical decision task and, if so, whether results would differ depending on whether the presentation of neutral and taboo words was mixed or blocked. Regardless of presentation, participants responded to taboo words faster than neutral words. Furthermore, talker effects for the female talker emerged when participants heard both taboo and neutral words (consistent with an attention-based hypothesis), but not for participants who heard only taboo or only neutral words (consistent with the time-course hypothesis). These findings have important implications for theoretical models of spoken word recognition.
Dissociation of tone and vowel processing in Mandarin idioms.
Hu, Jiehui; Gao, Shan; Ma, Weiyi; Yao, Dezhong
2012-09-01
Using event-related potentials, this study measured the access of suprasegmental (tone) and segmental (vowel) information in spoken word recognition with Mandarin idioms. Participants performed a delayed-response acceptability task, in which they judged the correctness of the last word of each idiom, which might deviate from the correct word in either tone or vowel. Results showed that, compared with the correct idioms, a larger early negativity appeared only for vowel violation. Additionally, a larger N400 effect was observed for vowel mismatch than tone mismatch. A control experiment revealed that these differences were not due to low-level physical differences across conditions; instead, they represented the greater constraining power of vowels than tones in the lexical selection and semantic integration of the spoken words. Furthermore, tone violation elicited a more robust late positive component than vowel violation, suggesting different reanalyses of the two types of information. In summary, the current results support a functional dissociation of tone and vowel processing in spoken word recognition. Copyright © 2012 Society for Psychophysiological Research.
Context Effects and Spoken Word Recognition of Chinese: An Eye-Tracking Study
ERIC Educational Resources Information Center
Yip, Michael C. W.; Zhai, Mingjun
2018-01-01
This study examined the time-course of context effects on spoken word recognition during Chinese sentence processing. We recruited 60 native Mandarin listeners to participate in an eye-tracking experiment. In this eye-tracking experiment, listeners were told to listen to a sentence carefully, which ended with a Chinese homophone, and look at…
ERIC Educational Resources Information Center
Cao, Fan; Khalid, Kainat; Zaveri, Rishi; Bolger, Donald J.; Bitan, Tali; Booth, James R.
2010-01-01
Priming effects were examined in 40 children (9-15 years old) using functional magnetic resonance imaging (fMRI). An orthographic judgment task required participants to determine if two sequentially presented spoken words had the same spelling for the rime. Four lexical conditions were designed: similar orthography and phonology (O[superscript…
A Task-Dependent Causal Role for Low-Level Visual Processes in Spoken Word Comprehension
ERIC Educational Resources Information Center
Ostarek, Markus; Huettig, Falk
2017-01-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual…
The Effect of Talker Variability on Word Recognition in Preschool Children
Ryalls, Brigette Oliver; Pisoni, David B.
2012-01-01
In a series of experiments, the authors investigated the effects of talker variability on children's word recognition. In Experiment 1, when stimuli were presented in the clear, 3- and 5-year-olds were less accurate at identifying words spoken by multiple talkers than those spoken by a single talker when the multiple-talker list was presented first. In Experiment 2, when words were presented in noise, 3-, 4-, and 5-year-olds again performed worse in the multiple-talker condition than in the single-talker condition, this time regardless of order; processing multiple talkers became easier with age. Experiment 3 showed that both children and adults were slower to repeat words from multiple-talker lists than from single-talker lists. More importantly, children (but not adults) matched acoustic properties of the stimuli (specifically, duration). These results provide important new information about the development of talker normalization in speech perception and spoken word recognition. PMID:9149923
Pedagogy for Liberation: Spoken Word Poetry in Urban Schools
ERIC Educational Resources Information Center
Fiore, Mia
2015-01-01
The Black Arts Movement of the 1960s and 1970s, hip hop of the 1980s and early 1990s, and spoken word poetry have each attempted to initiate the dialogical process outlined by Paulo Freire as necessary in overturning oppression. Each art form has done this by critically engaging with the world and questioning dominant systems of power. However,…
ERIC Educational Resources Information Center
Dance, Frank E. X.
One of many aspects of the linguistic centrality of the spoken word is the "acoustic trigger" to conceptualization, the most significant primal trigger in human beings, which when activated results in contrast and comparison leading to symbolic conceptualization. The oral/aural mode, or vocal production and acoustic perception, is developmentally…
Cruse, Damian; Wilding, Edward L
2011-06-01
In a pair of recent studies, frontally distributed event-related potential (ERP) indices of two distinct post-retrieval processes were identified. It has been proposed that one of these processes operates over any kind of task-relevant information in service of task demands, while the other operates selectively over recovered contextual (episodic) information. The experiment described here was designed to test this account, by requiring retrieval of different kinds of contextual information to that required in previous relevant studies. Participants heard words spoken in either a male or female voice at study, and ERPs were acquired at test, where all words were presented visually. Half of the test words had been spoken at study. Participants first made an old/new judgment, distinguishing via key press between studied and unstudied words. For words judged 'old', participants indicated the voice in which the word had been spoken at study, and their confidence (high/low) in the voice judgment. There was evidence for only one of the two frontal old/new effects that had been identified in the previous studies. One possibility is that the ERP effect in previous studies that was tied specifically to recollection reflects processes operating over only some kinds of contextual information. An alternative is that the index reflects processes that are engaged primarily when there are few contextual features that distinguish between studied stimuli. Copyright © 2011 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Bowers, Jeffrey S.; Davis, Colin J.; Mattys, Sven L.; Damian, Markus F.; Hanley, Derek
2009-01-01
Three picture-word interference (PWI) experiments assessed the extent to which embedded subset words are activated during the identification of spoken superset words (e.g., "bone" in "trombone"). Participants named aloud pictures (e.g., "brain") while spoken distractors were presented. In the critical condition,…
ERIC Educational Resources Information Center
Rama, Pia; Sirri, Louah; Serres, Josette
2013-01-01
Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related…
Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences.
Hunter, Cynthia R; Pisoni, David B
Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences. Under mild spectral degradation (eight-channel vocoding), the effect of load was present for low-predictability sentences but not for high-predictability sentences. There were also reliable downstream effects of speech degradation and sentence predictability on recall of the preload digit sequences. Long digit sequences were more easily recalled following spoken sentences that were less spectrally degraded. When digits were reported after identification of sentence-final words, short digit sequences were recalled more accurately when the spoken sentences were predictable. Extrinsic cognitive load can impair recognition of spectrally degraded spoken words in a sentence recognition task. Cognitive load affected word identification in both high- and low-predictability sentences, suggesting that load may impact both context use and lower-level perceptual processes. Consistent with prior work, LE also had downstream effects on memory for visual digit sequences. Results support the proposal that extrinsic cognitive load and LE induced by signal degradation both draw on a central, limited pool of cognitive resources that is used to recognize spoken words in sentences under adverse listening conditions.
Talker and accent variability effects on spoken word recognition
NASA Astrophysics Data System (ADS)
Nyang, Edna E.; Rogers, Catherine L.; Nishi, Kanae
2003-04-01
A number of studies have shown that words in a list are recognized less accurately in noise and with longer response latencies when they are spoken by multiple talkers, rather than a single talker. These results have been interpreted as support for an exemplar-based model of speech perception, in which it is assumed that detailed information regarding the speaker's voice is preserved in memory and used in recognition, rather than being eliminated via normalization. In the present study, the effects of varying both accent and talker are investigated using lists of words spoken by (a) a single native English speaker, (b) six native English speakers, (c) three native English speakers and three Japanese-accented English speakers. Twelve /hVd/ words were mixed with multi-speaker babble at three signal-to-noise ratios (+10, +5, and 0 dB) to create the word lists. Native English-speaking listeners' percent-correct recognition for words produced by native English speakers across the three talker conditions (single talker native, multi-talker native, and multi-talker mixed native and non-native) and three signal-to-noise ratios will be compared to determine whether sources of speaker variability other than voice alone add to the processing demands imposed by simple (i.e., single accent) speaker variability in spoken word recognition.
Audiovisual speech facilitates voice learning.
Sheffert, Sonya M; Olson, Elizabeth
2004-02-01
In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.
Instructional Benefits of Spoken Words: A Review of Cognitive Load Factors
ERIC Educational Resources Information Center
Kalyuga, Slava
2012-01-01
Spoken words have always been an important component of traditional instruction. With the development of modern educational technology tools, spoken text more often replaces or supplements written or on-screen textual representations. However, there could be a cognitive load cost involved in this trend, as spoken words can have both benefits and…
Influences of spoken word planning on speech recognition.
Roelofs, Ardi; Ozdemir, Rebecca; Levelt, Willem J M
2007-09-01
In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway. (c) 2007 APA, all rights reserved.
When semantics aids phonology: A processing advantage for iconic word forms in aphasia.
Meteyard, Lotte; Stoppard, Emily; Snudden, Dee; Cappa, Stefano F; Vigliocco, Gabriella
2015-09-01
Iconicity is the non-arbitrary relation between properties of a phonological form and semantic content (e.g. "moo", "splash"). It is a common feature of both spoken and signed languages, and recent evidence shows that iconic forms confer an advantage during word learning. We explored whether iconic forms conferred a processing advantage for 13 individuals with aphasia following left-hemisphere stroke. Iconic and control words were compared in four different tasks: repetition, reading aloud, auditory lexical decision and visual lexical decision. An advantage for iconic words was seen for some individuals in all tasks, with consistent group effects emerging in reading aloud and auditory lexical decision. Both these tasks rely on mapping between semantics and phonology. We conclude that iconicity aids spoken word processing for individuals with aphasia. This advantage is due to a stronger connection between semantic information and phonological forms. Copyright © 2015 Elsevier Ltd. All rights reserved.
The socially weighted encoding of spoken words: a dual-route approach to speech perception.
Sumner, Meghan; Kim, Seung Kyung; King, Ed; McGowan, Kevin B
2013-01-01
Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially weighted, resulting in sparse, but high-resolution clusters of socially idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.
Understanding environmental sounds in sentence context.
Uddin, Sophia; Heald, Shannon L M; Van Hedger, Stephen C; Klos, Serena; Nusbaum, Howard C
2018-03-01
There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions. Copyright © 2017 Elsevier B.V. All rights reserved.
The influence of talker and foreign-accent variability on spoken word identification.
Bent, Tessa; Holt, Rachael Frush
2013-03-01
In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect.
How long-term memory and accentuation interact during spoken language comprehension.
Li, Xiaoqing; Yang, Yufang
2013-04-01
Spoken language comprehension requires immediate integration of different information types, such as semantics, syntax, and prosody. Meanwhile, both the information derived from speech signals and the information retrieved from long-term memory exert their influence on language comprehension immediately. Using EEG (electroencephalogram), the present study investigated how the information retrieved from long-term memory interacts with accentuation during spoken language comprehension. Mini Chinese discourses were used as stimuli, with an interrogative or assertive context sentence preceding the target sentence. The target sentence included one critical word conveying new information. The critical word was either highly expected or lowly expected given the information retrieved from long-term memory. Moreover, the critical word was either consistently accented or inconsistently de-accented. The results revealed that for lowly expected new information, inconsistently de-accented words elicited a larger N400 and larger theta power increases (4-6 Hz) than consistently accented words. In contrast, for the highly expected new information, consistently accented words elicited a larger N400 and larger alpha power decreases (8-14 Hz) than inconsistently de-accented words. The results suggest that, during spoken language comprehension, the effect of accentuation interacted with the information retrieved from long-term memory immediately. Moreover, our results also have important consequences for our understanding of the processing nature of the N400. The N400 amplitude is not only enhanced for incorrect information (new and de-accented word) but also enhanced for correct information (new and accented words). Copyright © 2013 Elsevier Ltd. All rights reserved.
Word Recognition in Auditory Cortex
ERIC Educational Resources Information Center
DeWitt, Iain D. J.
2013-01-01
Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…
A task-dependent causal role for low-level visual processes in spoken word comprehension.
Ostarek, Markus; Huettig, Falk
2017-08-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Differential processing of thematic and categorical conceptual relations in spoken word production.
de Zubicaray, Greig I; Hansen, Samuel; McMahon, Katie L
2013-02-01
Studies of semantic context effects in spoken word production have typically distinguished between categorical (or taxonomic) and associative relations. However, associates tend to confound semantic features or morphological representations, such as whole-part relations and compounds (e.g., BOAT-anchor, BEE-hive). Using a picture-word interference paradigm and functional magnetic resonance imaging (fMRI), we manipulated categorical (COW-rat) and thematic (COW-pasture) TARGET-distractor relations in a balanced design, finding interference and facilitation effects on naming latencies, respectively, as well as differential patterns of brain activation compared with an unrelated distractor condition. While both types of distractor relation activated the middle portion of the left middle temporal gyrus (MTG) consistent with retrieval of conceptual or lexical representations, categorical relations involved additional activation of posterior left MTG, consistent with retrieval of a lexical cohort. Thematic relations involved additional activation of the left angular gyrus. These results converge with recent lesion evidence implicating the left inferior parietal lobe in processing thematic relations and may indicate a potential role for this region during spoken word production. (c) 2013 APA, all rights reserved.
Scaling laws and model of words organization in spoken and written language
NASA Astrophysics Data System (ADS)
Bian, Chunhua; Lin, Ruokuang; Zhang, Xiaoyu; Ma, Qianli D. Y.; Ivanov, Plamen Ch.
2016-01-01
A broad range of complex physical and biological systems exhibits scaling laws. The human language is a complex system of words organization. Studies of written texts have revealed intriguing scaling laws that characterize the frequency of words occurrence, rank of words, and growth in the number of distinct words with text length. While studies have predominantly focused on the language system in its written form, such as books, little attention is given to the structure of spoken language. Here we investigate a database of spoken language transcripts and written texts, and we uncover that words organization in both spoken language and written texts exhibits scaling laws, although with different crossover regimes and scaling exponents. We propose a model that provides insight into words organization in spoken language and written texts, and successfully accounts for all scaling laws empirically observed in both language forms.
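The scaling laws alluded to here are, concretely, the rank-frequency (Zipf) law f(r) ~ r^(-alpha) and the vocabulary-growth (Heaps) law V(n) ~ n^beta. The sketch below is our own simplification rather than the authors' model: it estimates both exponents from a token sequence by ordinary least squares in log-log space, and the file name and whitespace tokenizer are assumptions.

```python
# Estimate Zipf and Heaps exponents from a token sequence.
from collections import Counter
import numpy as np

def zipf_exponent(tokens):
    # Rank-frequency law: f(r) ~ r^(-alpha).
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope

def heaps_exponent(tokens):
    # Vocabulary growth: V(n) ~ n^beta, where V(n) counts distinct words
    # among the first n tokens.
    seen, growth = set(), []
    for tok in tokens:
        seen.add(tok)
        growth.append(len(seen))
    n = np.arange(1, len(tokens) + 1)
    slope, _ = np.polyfit(np.log(n), np.log(growth), 1)
    return slope

tokens = open("transcript.txt").read().lower().split()  # hypothetical corpus file
print(f"Zipf alpha ~ {zipf_exponent(tokens):.2f}, Heaps beta ~ {heaps_exponent(tokens):.2f}")
```

Running the same estimator separately over spoken transcripts and written texts would expose differences in exponents between the two language forms; detecting the crossover regimes the abstract mentions would require piecewise fits rather than a single regression.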
ERIC Educational Resources Information Center
Yoncheva, Yuliya N.; Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.
2013-01-01
ERP responses to spoken words are sensitive to both rhyming effects and effects of associated spelling patterns. Are such effects automatically elicited by spoken words or dependent on selectively attending to phonology? To address this question, ERP responses to spoken word pairs were investigated under two equally demanding listening tasks that…
Wang, Jie; Wong, Andus Wing-Kuen; Chen, Hsuan-Chih
2017-06-05
The time course of phonological encoding in Mandarin monosyllabic word production was investigated by using the picture-word interference paradigm. Participants were asked to name pictures in Mandarin while visual distractor words were presented before, at, or after picture onset (i.e., stimulus-onset asynchrony/SOA = -100, 0, or +100 ms, respectively). Compared with the unrelated control, the distractors sharing atonal syllables with the picture names significantly facilitated the naming responses at -100- and 0-ms SOAs. In addition, the facilitation effect of sharing word-initial segments only appeared at 0-ms SOA, and null effects were found for sharing word-final segments. These results indicate that both syllables and subsyllabic units play important roles in Mandarin spoken word production and more critically that syllabic processing precedes subsyllabic processing. The current results lend strong support to the proximate units principle (O'Seaghdha, Chen, & Chen, 2010), which holds that the phonological structure of spoken word production is language-specific and that atonal syllables are the proximate phonological units in Mandarin Chinese. On the other hand, the significance of word-initial segments over word-final segments suggests that serial processing of segmental information seems to be universal across Germanic languages and Chinese, which remains to be verified in future studies.
ERIC Educational Resources Information Center
Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.
2017-01-01
Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…
Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study
ERIC Educational Resources Information Center
Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua
2012-01-01
Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…
Spectrotemporal processing drives fast access to memory traces for spoken words.
Tavano, A; Grimm, S; Costa-Faidella, J; Slabu, L; Schröger, E; Escera, C
2012-05-01
The Mismatch Negativity (MMN) component of the event-related potentials is generated when a detectable spectrotemporal feature of the incoming sound does not match the sensory model set up by preceding repeated stimuli. MMN is enhanced at frontocentral scalp sites for deviant words when compared to acoustically similar deviant pseudowords, suggesting that automatic access to long-term memory traces for spoken words contributes to MMN generation. Does spectrotemporal feature matching also drive automatic lexical access? To test this, we recorded human auditory event-related potentials (ERPs) to disyllabic spoken words and pseudowords within a passive oddball paradigm. We first aimed at replicating the word-related MMN enhancement effect for Spanish, thereby adding to the available cross-linguistic evidence (e.g., Finnish, English). We then probed its resilience to spectrotemporal perturbation by inserting short (20 ms) and long (120 ms) silent gaps between first and second syllables of deviant and standard stimuli. A significantly enhanced, frontocentrally distributed MMN to deviant words was found for stimuli with no gap. The long gap yielded no deviant word MMN, showing that prior expectations of word form limits in a given language influence deviance detection processes. Crucially, the insertion of a short gap suppressed deviant word MMN enhancement at frontocentral sites. We propose that spectrotemporal point-wise matching constitutes a core mechanism for fast serial computations in audition and language, bridging sensory and long-term memory systems. Copyright © 2012 Elsevier Inc. All rights reserved.
The effect of background noise on the word activation process in nonnative spoken-word recognition.
Scharenborg, Odette; Coumans, Juul M J; van Hout, Roeland
2018-02-01
This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? and (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on (non-)native speech recognition? English and Dutch students participated in an English word recognition experiment, in which either a word's onset or offset was masked by noise. The native listeners outperformed the nonnative listeners in all listening conditions. Importantly, however, the effect of noise on the multiple activation process was found to be remarkably similar in native and nonnative listening. The presence of noise increased the set of candidate words considered for recognition in both native and nonnative listening. The results indicate that the observed performance differences between the English and Dutch listeners should not be primarily attributed to a differential effect of noise, but rather to the difference between native and nonnative listening. Additional analyses showed that word-initial information was found to be more important than word-final information during spoken-word recognition. When word-initial information was no longer reliably available, word recognition accuracy dropped and word frequency information could no longer be used, suggesting that word frequency information is strongly tied to the onset of words and the earliest moments of lexical access. Proficiency and inhibition ability were found to influence nonnative spoken-word recognition in noise, with a higher proficiency in the nonnative language and worse inhibition ability leading to improved recognition performance. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
ERIC Educational Resources Information Center
Broersma, Mirjam; Cutler, Anne
2008-01-01
L2 listening can involve the phantom activation of words which are not actually in the input. All spoken-word recognition involves multiple concurrent activation of word candidates, with selection of the correct words achieved by a process of competition between them. L2 listening involves more such activation than L1 listening, and we report two…
Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.
Huettig, Falk; Brouwer, Susanne
2015-05-01
It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.
Interpreting Chicken-Scratch: Lexical Access for Handwritten Words
ERIC Educational Resources Information Center
Barnhart, Anthony S.; Goldinger, Stephen D.
2010-01-01
Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…
Hip-Hop Hamlet: Hybrid Interpretive Discourse in a Suburban High School English Class
ERIC Educational Resources Information Center
Anglin, Joanna L.; Smagorinsky, Peter
2014-01-01
This study investigates the collaborative composing processes of a group of five high school seniors who constructed interpretations of each of the five acts of Shakespeare's Hamlet through the medium of spoken word performances. The group composing processes were analyzed to identify how the students drew on conventions from the spoken word…
Words Get in the Way: Linguistic Effects on Talker Discrimination.
Narayan, Chandan R; Mak, Lorinda; Bialystok, Ellen
2017-07-01
A speech perception experiment provides evidence that the linguistic relationship between words affects the discrimination of their talkers. Listeners discriminated two talkers' voices with various linguistic relationships between their spoken words. Listeners were asked whether two words were spoken by the same person or not. Word pairs varied with respect to the linguistic relationship between the component words, forming either phonological rhymes, lexical compounds, reversed compounds, or unrelated pairs. The degree of linguistic relationship between the words affected talker discrimination in a graded fashion, revealing biases listeners have regarding the nature of words and the talkers that speak them. These results indicate that listeners expect a talker's words to be linguistically related, and more generally, that indexical processing is affected by linguistic information in a top-down fashion even when listeners are not told to attend to it. Copyright © 2016 Cognitive Science Society, Inc.
Speech perception and spoken word recognition: past and present.
Jusczyk, Peter W; Luce, Paul A
2002-02-01
The scientific study of the perception of spoken language has been an exciting, prolific, and productive area of research for more than 50 yr. We have learned much about infants' and adults' remarkable capacities for perceiving and understanding the sounds of their language, as evidenced by our increasingly sophisticated theories of acquisition, process, and representation. We present a selective but, we hope, representative review of the past half century of research on speech perception, paying particular attention to the historical and theoretical contexts within which this research was conducted. Our foci in this review fall on three principal topics: early work on the discrimination and categorization of speech sounds, more recent efforts to understand the processes and representations that subserve spoken word recognition, and research on how infants acquire the capacity to perceive their native language. Our intent is to provide the reader a sense of the progress our field has experienced over the last half century in understanding the human's extraordinary capacity for the perception of spoken language.
Novel Spoken Word Learning in Adults with Developmental Dyslexia
ERIC Educational Resources Information Center
Conner, Peggy S.
2013-01-01
A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…
"A Unified Poet Alliance": The Personal and Social Outcomes of Youth Spoken Word Poetry Programming
ERIC Educational Resources Information Center
Weinstein, Susan
2010-01-01
This article places youth spoken word (YSW) poetry programming within the larger framework of arts education. Drawing primarily on transcripts of interviews with teen poets and adult teaching artists and program administrators, the article identifies specific benefits that participants ascribe to youth spoken word, including the development of…
Cognitive Control Influences the Use of Meaning Relations during Spoken Sentence Comprehension
ERIC Educational Resources Information Center
Boudewyn, Megan A.; Long, Debra L.; Swaab, Tamara Y.
2012-01-01
The aim of this study was to investigate individual differences in the influence of lexical association on word recognition during auditory sentence processing. Lexical associations among individual words (e.g. salt and pepper) represent one type of semantic information that is available during the processing of words in context. We predicted that…
Self-Selection of Vocabulary in Reading Instruction
ERIC Educational Resources Information Center
Peterson, Candida C.
1974-01-01
The child's process of learning to read was simulated by teaching adults to associate Chinese characters with spoken words. When the students chose words to be learned, learning was more rapid than when words were selected by the examiner from a basal reader. (Author/JA)
Ljungberg, Jessica K; Parmentier, Fabrice
2012-10-01
The objective was to study the involuntary capture of attention by spoken words varying in intonation and valence. In studies of verbal alarms, the propensity of alarms to capture attention has been primarily assessed with the use of subjective ratings of their perceived urgency. Past studies suggest that such ratings vary with the alarms' spoken urgency and content. We measured attention capture by spoken words varying in valence (negative vs. neutral) and intonation (urgently vs. nonurgently spoken) through subjective ratings and behavioral measures. The key behavioral measure was the response latency to visual stimuli in the presence of spoken words breaking away from the periodical repetition of a tone. The results showed that all words captured attention relative to a baseline standard tone but that this effect was partly counteracted by a relative speeding of responses for urgently compared with nonurgently spoken words. Word valence did not affect behavioral performance. Rating data showed that both intonation and valence increased significantly perceived urgency and attention grabbing without any interaction. The data suggest a congruency between subjective ratings and behavioral performance with respect to spoken intonation but not valence. This study demonstrates the usefulness and feasibility of objective measures of attention capture to help design efficient alarm systems.
Inferring Speaker Affect in Spoken Natural Language Communication
ERIC Educational Resources Information Center
Pon-Barry, Heather Roberta
2013-01-01
The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…
ERIC Educational Resources Information Center
Mishra, Ramesh Kumar; Singh, Niharika
2014-01-01
Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…
Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech
ERIC Educational Resources Information Center
Yip, Michael C.
2016-01-01
Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactics information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occurred within an individual Chinese word, this kind of categorical phonotactics information of Chinese…
"Poetry Does Really Educate": An Interview with Spoken Word Poet Luka Lesson
ERIC Educational Resources Information Center
Xerri, Daniel
2016-01-01
Spoken word poetry is a means of engaging young people with a genre that has often been much maligned in classrooms all over the world. This interview with the Australian spoken word poet Luka Lesson explores issues that are of pressing concern to poetry education. These include the idea that engagement with poetry in schools can be enhanced by…
Attempting Arts Integration: Secondary Teachers' Experiences with Spoken Word Poetry
ERIC Educational Resources Information Center
Williams, Wendy R.
2018-01-01
Spoken word poetry is an art form that involves poetry writing and performance. Past research on spoken word has described the benefits for poets and looked at its use in pre-service teacher education; however, research is needed to understand how to assist in-service teachers in using this art form. During the 2016-2017 school year, 15 teachers…
Voice tracking and spoken word recognition in the presence of other voices
NASA Astrophysics Data System (ADS)
Litong-Palima, Marisciel; Violanda, Renante; Saloma, Caesar
2004-12-01
We study the human hearing process by modeling the hair cell as a thresholded Hopf bifurcator and compare our calculations with experimental results involving human subjects in two different multi-source listening tasks: voice tracking and spoken-word recognition. In the model, we observed noise suppression by destructive interference between noise sources, which weakens the effective noise strength acting on the hair cell. Different success rate characteristics were observed for the two tasks. Hair cell performance at low threshold levels agrees well with results from voice-tracking experiments, while word-recognition results are consistent with a linear model of the hearing process. The ability of humans to track a target voice is robust against cross-talk interference, unlike word-recognition performance, which deteriorates quickly with the number of uncorrelated noise sources in the environment, a response behavior associated with linear systems.
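For reference, the Hopf normal form that is standard in active hair-cell models is reproduced below; the threshold theta applied to the forcing term is our hedged reading of "thresholded Hopf bifurcator", not the authors' exact formulation.

```latex
% Hopf normal form for hair-bundle dynamics: z(t) is the complex bundle
% displacement, \mu the bifurcation parameter (criticality at \mu = 0),
% \omega_0 the characteristic frequency. Applying the threshold \theta to the
% summed input from target and noise sources is an assumption.
\begin{align}
  \dot{z} &= (\mu + i\omega_0)\,z - |z|^{2}z + F(t), \\
  F(t) &=
    \begin{cases}
      \sum_{j} A_{j} e^{i\omega_{j} t}, & \bigl|\sum_{j} A_{j} e^{i\omega_{j} t}\bigr| > \theta, \\
      0, & \text{otherwise}.
    \end{cases}
\end{align}
```

On this reading, destructive interference among the noise terms reduces the magnitude of the summed forcing and can pull it below theta, which is one way the noise suppression described above can weaken the effective noise strength at the hair cell.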
Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih
2017-07-19
It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.
Eye movements during spoken word recognition in Russian children.
Sekerina, Irina A; Brooks, Patricia J
2007-09-01
This study explores incremental processing in spoken word recognition in Russian 5- and 6-year-olds and adults using free-viewing eye-tracking. Participants viewed scenes containing pictures of four familiar objects and clicked on a target embedded in a spoken instruction. In the cohort condition, two object names shared identical three-phoneme onsets. In the noncohort condition, all object names had unique onsets. Coarse-grain analyses of eye movements indicated that adults produced looks to the competitor on significantly more cohort trials than on noncohort trials, whereas children surprisingly failed to demonstrate cohort competition due to widespread exploratory eye movements across conditions. Fine-grain analyses, in contrast, showed a similar time course of eye movements across children and adults, but with cohort competition lingering more than 1 s longer in children. The dissociation between coarse-grain and fine-grain eye movements indicates a need to consider multiple behavioral measures in making developmental comparisons in language processing.
Spoken Word Recognition in Toddlers Who Use Cochlear Implants
Grieco-Calub, Tina M.; Saffran, Jenny R.; Litovsky, Ruth Y.
2010-01-01
Purpose The purpose of this study was to assess the time course of spoken word recognition in 2-year-old children who use cochlear implants (CIs) in quiet and in the presence of speech competitors. Method Children who use CIs and age-matched peers with normal acoustic hearing listened to familiar auditory labels, in quiet or in the presence of speech competitors, while their eye movements to target objects were digitally recorded. Word recognition performance was quantified by measuring each child’s reaction time (i.e., the latency between the spoken auditory label and the first look at the target object) and accuracy (i.e., the amount of time that children looked at target objects within 367 ms to 2,000 ms after the label onset). Results Children with CIs were less accurate and took longer to fixate target objects than did age-matched children without hearing loss. Both groups of children showed reduced performance in the presence of the speech competitors, although many children continued to recognize labels at above-chance levels. Conclusion The results suggest that the unique auditory experience of young CI users slows the time course of spoken word recognition abilities. In addition, real-world listening environments may slow language processing in young language learners, regardless of their hearing status. PMID:19951921
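Both dependent measures are simple functions of the gaze record. Here is a minimal sketch, assuming samples have already been coded as (time in ms relative to label onset, area of interest) pairs; that coding step, and the sample format, are hypothetical simplifications of the eye-tracking pipeline.

```python
# Word-recognition measures from coded gaze samples: (time_ms, aoi) pairs,
# with aoi in {"target", "distractor", None}.

def accuracy(samples, start=367, end=2000):
    """Proportion of looking time spent on the target within the analysis window."""
    window = [aoi for t, aoi in samples if start <= t <= end]
    return sum(aoi == "target" for aoi in window) / len(window) if window else 0.0

def reaction_time(samples):
    """Latency of the first look at the target after label onset, or None."""
    return next((t for t, aoi in samples if t >= 0 and aoi == "target"), None)

trial = [(-100, None), (200, "distractor"), (450, "target"), (900, "target")]
print(reaction_time(trial), accuracy(trial))  # 450 1.0 (all window samples on target)
```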
Spoken word recognition by Latino children learning Spanish as their first language*
HURTADO, NEREYDA; MARCHMAN, VIRGINIA A.; FERNALD, ANNE
2010-01-01
Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development. PMID:17542157
ERIC Educational Resources Information Center
Dufour, Sophie; Brunelliere, Angele; Frauenfelder, Ulrich H.
2013-01-01
Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to…
ERIC Educational Resources Information Center
Jones, Lyle V.; Wepman, Joseph M.
This word count is a composite listing of the different words spoken by a selected sample of 54 English-speaking adults and the frequency with which each of the different words was used in a particular test. The stimulus situation was identical for each subject and consisted of 20 cards of the Thematic Apperception Test. Although most word counts…
Chen, Yi-Chuan; Spence, Charles
2018-04-30
We examined the time-courses and categorical specificity of the crossmodal semantic congruency effects elicited by naturalistic sounds and spoken words on the processing of visual pictures (Experiment 1) and printed words (Experiment 2). Auditory cues were presented at 7 different stimulus onset asynchronies (SOAs) with respect to the visual targets, and participants made speeded categorization judgments (living vs. nonliving). Three common effects were observed across 2 experiments: Both naturalistic sounds and spoken words induced a slowly emerging congruency effect when leading by 250 ms or more in the congruent compared with the incongruent condition, and a rapidly emerging inhibitory effect when leading by 250 ms or less in the incongruent condition as opposed to the noise condition. Only spoken words that did not match the visual targets elicited an additional inhibitory effect when leading by 100 ms or when presented simultaneously. Compared with nonlinguistic stimuli, the crossmodal congruency effects associated with linguistic stimuli occurred over a wider range of SOAs and occurred at a more specific level of the category hierarchy (i.e., the basic level) than was required by the task. A comprehensive framework is proposed to provide a dynamic view regarding how meaning is extracted during the processing of visual or auditory linguistic and nonlinguistic stimuli, therefore contributing to our understanding of multisensory semantic processing in humans. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Rämä, Pia; Sirri, Louah; Serres, Josette
2013-04-01
Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that the N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds, a similar effect was observed only in those children with higher word-production ability. The results suggest that words are categorically organized in the mental lexicon of children at the age of 2 years and even earlier in children with a high vocabulary. Copyright © 2013 Elsevier Inc. All rights reserved.
Neural Processing of Spoken Words in Specific Language Impairment and Dyslexia
ERIC Educational Resources Information Center
Helenius, Paivi; Parviainen, Tiina; Paetau, Ritva; Salmelin, Riitta
2009-01-01
Young adults with a history of specific language impairment (SLI) differ from reading-impaired (dyslexic) individuals in terms of limited vocabulary and poor verbal short-term memory. Phonological short-term memory has been shown to play a significant role in learning new words. We investigated the neural signatures of auditory word recognition…
Iconic Factors and Language Word Order
ERIC Educational Resources Information Center
Moeser, Shannon Dawn
1975-01-01
College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)
Meyer, Ted A.; Pisoni, David B.
2012-01-01
Objective: The Phonetically Balanced Kindergarten (PBK) Test (Haskins, Reference Note 2) has been used for almost 50 yr to assess spoken word recognition performance in children with hearing impairments. The test originally consisted of four lists of 50 words, but only three of the lists (lists 1, 3, and 4) were considered “equivalent” enough to be used clinically with children. Our goal was to determine if the lexical properties of the different PBK lists could explain any differences between the three “equivalent” lists and the fourth PBK list (List 2) that has not been used in clinical testing. Design: Word frequency and lexical neighborhood frequency and density measures were obtained from a computerized database for all of the words on the four lists from the PBK Test as well as the words from a single PB-50 (Egan, 1948) word list. Results: The words in the “easy” PBK list (List 2) were of higher frequency than the words in the three “equivalent” lists. Moreover, the lexical neighborhoods of the words on the “easy” list contained fewer phonetically similar words than the neighborhoods of the words on the other three “equivalent” lists. Conclusions: It is important for researchers to consider word frequency and lexical neighborhood frequency and density when constructing word lists for testing speech perception. The results of this computational analysis of the PBK Test provide additional support for the proposal that spoken words are recognized “relationally” in the context of other phonetically similar words in the lexicon. Implications of using open-set word recognition tests with children with hearing impairments are discussed with regard to the specific vocabulary and information processing demands of the PBK Test. PMID:10466571
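The frequency and neighborhood measures used in analyses like this one are straightforward to compute. Below is a minimal sketch, assuming a toy phonemic lexicon (the transcriptions and counts are hypothetical, not the computerized database used in the study); neighbors are defined by the standard one-phoneme substitution, addition, or deletion rule.

```python
# Sketch: neighborhood density and neighbor-frequency measures of the kind
# used to compare word lists. The lexicon below is a hypothetical stand-in.

def one_away(a, b):
    """True if phoneme sequences a and b differ by exactly one
    substitution, insertion, or deletion (the standard neighbor rule)."""
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) != 1:
        return False
    longer, shorter = (a, b) if len(a) > len(b) else (b, a)
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

def lexical_stats(word, lexicon):
    """Return (neighborhood density, mean neighbor frequency) for a word,
    given a lexicon mapping phoneme tuples to lexical frequencies."""
    neighbors = [w for w in lexicon if one_away(word, w)]
    density = len(neighbors)
    mean_freq = sum(lexicon[w] for w in neighbors) / density if density else 0.0
    return density, mean_freq

# Toy lexicon: phoneme tuples -> frequency counts (made-up values).
lexicon = {("k", "ae", "t"): 300, ("b", "ae", "t"): 150,
           ("k", "ae", "p"): 90, ("ae", "t"): 500, ("k", "ae", "b"): 10}
print(lexical_stats(("k", "ae", "t"), lexicon))  # -> (4, 187.5)
```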
Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words
Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.
2012-01-01
Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774
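To make the contrast concrete: phonotactic probability is typically summarized from position-specific segment and biphone frequencies, while density is the neighbor count computed above. A minimal sketch, assuming a toy corpus of phoneme sequences (type counts only; the published measures are usually frequency-weighted):

```python
from collections import defaultdict

def phonotactic_tables(corpus):
    """Position-specific segment and biphone probabilities estimated from
    a toy corpus of phoneme sequences."""
    seg_counts, pos_totals = defaultdict(int), defaultdict(int)
    bi_counts, bi_totals = defaultdict(int), defaultdict(int)
    for word in corpus:
        for i, seg in enumerate(word):
            seg_counts[(i, seg)] += 1
            pos_totals[i] += 1
        for i in range(len(word) - 1):
            bi_counts[(i, word[i:i + 2])] += 1
            bi_totals[i] += 1
    seg_prob = {k: v / pos_totals[k[0]] for k, v in seg_counts.items()}
    bi_prob = {k: v / bi_totals[k[0]] for k, v in bi_counts.items()}
    return seg_prob, bi_prob

def phonotactic_probability(word, seg_prob, bi_prob):
    """Summed positional segment and biphone probabilities for a word."""
    s = sum(seg_prob.get((i, seg), 0.0) for i, seg in enumerate(word))
    b = sum(bi_prob.get((i, word[i:i + 2]), 0.0) for i in range(len(word) - 1))
    return s, b

corpus = [("k", "ae", "t"), ("k", "ae", "b"), ("b", "ae", "t")]
seg_prob, bi_prob = phonotactic_tables(corpus)
print(phonotactic_probability(("k", "ae", "t"), seg_prob, bi_prob))
```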
Learning and Consolidation of Novel Spoken Words
ERIC Educational Resources Information Center
Davis, Matthew H.; Di Betta, Anna Maria; Macdonald, Mark J. E.; Gaskell, Gareth
2009-01-01
Two experiments explored the neural mechanisms underlying the learning and consolidation of novel spoken words. In Experiment 1, participants learned two sets of novel words on successive days. A subsequent recognition test revealed high levels of familiarity for both sets. However, a lexical decision task showed that only novel words learned on…
Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes
ERIC Educational Resources Information Center
Dich, Nadya
2014-01-01
A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…
Italians Use Abstract Knowledge about Lexical Stress during Spoken-Word Recognition
ERIC Educational Resources Information Center
Sulpizio, Simone; McQueen, James M.
2012-01-01
In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word's stress pattern rapidly in word recognition, but only for words with antepenultimate stress.…
ERIC Educational Resources Information Center
Dang, Thi Ngoc Yen; Coxhead, Averil; Webb, Stuart
2017-01-01
The linguistic features of academic spoken English are different from those of academic written English. Therefore, for this study, an Academic Spoken Word List (ASWL) was developed and validated to help second language (L2) learners enhance their comprehension of academic speech in English-medium universities. The ASWL contains 1,741 word…
Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.
Chen, Yi-Chuan; Spence, Charles
2011-10-01
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously; Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimuli induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
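The sensitivity and criterion estimates mentioned here follow standard signal detection formulas. A small illustration, using a common log-linear correction to avoid infinite z-scores at hit or false-alarm rates of 0 or 1 (the trial counts are made up):

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Estimate sensitivity (d') and response criterion (c) from a yes/no
    detection task, with a log-linear correction for extreme rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

print(dprime_and_criterion(hits=40, misses=10, false_alarms=5, correct_rejections=45))
```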
Shuai, Lan; Malins, Jeffrey G
2017-02-01
Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we built on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
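The published jTRACE modification itself is not reproduced here, but the core idea, letting segment and tone cues feed lexical units on similar timescales, can be sketched as a toy interactive-activation loop. All lexicon entries, parameters, and timing assumptions below are illustrative, not the model's actual values:

```python
# Toy interactive-activation sketch (not the published jTRACE code): lexical
# units accumulate activation from segment matches plus a tone match that
# becomes available at roughly the same time as the vowel.

LEXICON = {
    "ma1": (("m", "a"), 1),  # hypothetical entries: (segments, tone category)
    "ma2": (("m", "a"), 2),
    "mo1": (("m", "o"), 1),
}

def activate(input_segments, input_tone, steps=10, rate=0.1, decay=0.05):
    """Run a few activation time steps; segments become available one per
    step, and the tone cue becomes available from step 1 onward."""
    act = {w: 0.0 for w in LEXICON}
    for t in range(steps):
        heard = input_segments[:t + 1]  # segments available so far
        for word, (segs, tone) in LEXICON.items():
            support = sum(s in heard for s in segs)
            if tone == input_tone and t >= 1:  # tone cue arrives with the vowel
                support += 1
            act[word] += rate * support - decay * act[word]
    return act

print(activate(("m", "a"), input_tone=2))  # "ma2" should end up most active
```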
ERIC Educational Resources Information Center
Dymoke, Sue
2017-01-01
This paper explores the impact of a Spoken Word Education Programme (SWEP hereafter) on young people's engagement with poetry in a group of schools in London, UK. It does so with reference to the secondary Discourses of school-based learning and the Spoken Word community, an artistic "community of practice" into which they were being…
The time course of spoken word learning and recognition: studies with artificial lexicons.
Magnuson, James S; Tanenhaus, Michael K; Aslin, Richard N; Dahan, Delphine
2003-06-01
The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.
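The kind of control an artificial lexicon affords can be illustrated with a small generator that pairs each base item with a cohort neighbor (shared onset) and a rhyme neighbor (shared offset), realizing "frequency" as exposure counts during training. The inventory and counts below are hypothetical, not the study's materials:

```python
import random

ONSETS, VOWELS, CODAS = ["p", "t", "k"], ["a", "i", "u"], ["b", "d", "g"]

def make_item():
    """One CVC nonsense word from the inventory."""
    return random.choice(ONSETS) + random.choice(VOWELS) + random.choice(CODAS)

def make_triplet(base):
    """Base item plus a cohort neighbor (same onset+vowel) and a rhyme
    neighbor (same vowel+coda)."""
    cohort = base[:2] + random.choice([c for c in CODAS if c != base[2]])
    rhyme = random.choice([o for o in ONSETS if o != base[0]]) + base[1:]
    return base, cohort, rhyme

random.seed(1)
base, cohort, rhyme = make_triplet(make_item())
# Frequency is realized as unequal exposure counts in the training list.
training_list = [base] * 9 + [cohort] * 3 + [rhyme] * 3
random.shuffle(training_list)
print(base, cohort, rhyme, training_list)
```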
Alt, Mary; Gutmann, Michelle L
2009-01-01
This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD). Sixty-eight adults were required to associate a novel object with a novel label, and then recognize semantic features of the object and phonological features of the label. Participants were tested for overt ability (accuracy) and covert processing (reaction time). The +ADHD group was less accurate at mapping semantic features and slower to respond to lexical labels than both other groups. Different factors correlated with word learning performance for each group. Adults with language and attention deficits are more impaired at word learning than adults with language deficits only. Despite behavioral profiles like typical peers, adults with hDSWL may use different processing strategies than their peers. Readers will be able to: (1) recognize the influence of a dual disability (hDSWL and ADHD) on word learning outcomes; (2) identify factors that may contribute to word learning in adults in terms of (a) the nature of the words to be learned and (b) the language processing of the learner.
The influence of lexical statistics on temporal lobe cortical dynamics during spoken word listening
Cibelli, Emily S.; Leonard, Matthew K.; Johnson, Keith; Chang, Edward F.
2015-01-01
Neural representations of words are thought to have a complex spatio-temporal cortical basis. It has been suggested that spoken word recognition is not a process of feed-forward computations from phonetic to lexical forms, but rather involves the online integration of bottom-up input with stored lexical knowledge. Using direct neural recordings from the temporal lobe, we examined cortical responses to words and pseudowords. We found that neural populations were not only sensitive to lexical status (real vs. pseudo), but also to cohort size (number of words matching the phonetic input at each time point) and cohort frequency (lexical frequency of those words). These lexical variables modulated neural activity from the posterior to anterior temporal lobe, and also dynamically as the stimuli unfolded on a millisecond time scale. Our findings indicate that word recognition is not purely modular, but relies on rapid and online integration of multiple sources of lexical knowledge. PMID:26072003
Eiesland, Eli Anne; Lind, Marianne
2012-03-01
Compounds are words made up of at least two other words (lexemes); because they exhibit both lexical and syntactic characteristics, they are particularly interesting for the study of language processing. Most studies of compounds and language processing have been based on data from experimental single word production and comprehension tasks. To enhance the ecological validity of morphological processing research, data from other contexts, such as discourse production, need to be considered. This study investigates the production of nominal compounds in semi-spontaneous spoken texts by a group of speakers with fluent types of aphasia compared to a group of neurologically healthy speakers. The speakers with aphasia produce significantly fewer nominal compound types in their texts than the non-aphasic speakers, and the compounds they produce exhibit fewer different types of semantic relations than the compounds produced by the non-aphasic speakers. The results are discussed in relation to theories of language processing.
The Developing Role of Prosody in Novel Word Interpretation
ERIC Educational Resources Information Center
Herold, Debora S.; Nygaard, Lynne C.; Chicos, Kelly A.; Namy, Laura L.
2011-01-01
This study examined whether children use prosodic correlates to word meaning when interpreting novel words. For example, do children infer that a word spoken in a deep, slow, loud voice refers to something larger than a word spoken in a high, fast, quiet voice? Participants were 4- and 5-year-olds who viewed picture pairs that varied along a…
Emotion-Memory Effects in Bilingual Speakers: A Levels-of-Processing Approach
ERIC Educational Resources Information Center
Aycicegi-Dinn, Ayse; Caldwell-Harris, Catherine L.
2009-01-01
Emotion-memory effects occur when emotion words are more frequently recalled than neutral words. Bilingual speakers report that taboo terms and emotional phrases generate a stronger emotional response when heard or spoken in their first language. This suggests that the basic emotion-memory effect will be stronger for words presented in a first language.…
ERIC Educational Resources Information Center
Friedrich, Claudia K.; Lahiri, Aditi; Eulitz, Carsten
2008-01-01
How does the mental lexicon cope with phonetic variants in recognition of spoken words? Using a lexical decision task with and without fragment priming, the authors compared the processing of German words and pseudowords that differed only in the place of articulation of the initial consonant (place). Across both experiments, event-related brain…
The Effect of Background Noise on the Word Activation Process in Nonnative Spoken-Word Recognition
ERIC Educational Resources Information Center
Scharenborg, Odette; Coumans, Juul M. J.; van Hout, Roeland
2018-01-01
This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on…
An investigation of phonology and orthography in spoken-word recognition.
Slowiaczek, Louisa M; Soltano, Emily G; Wieting, Shani J; Bishop, Karyn L
2003-02-01
The possible influence of initial phonological and/or orthographic information on spoken-word processing was examined in six experiments modelled after, and extending, the work of Jakimik, Cole, and Rudnicky (1985). Following Jakimik et al., Experiment 1 used polysyllabic primes with monosyllabic targets (e.g., BUCKLE-BUCK; MYSTERY-MISS). Experiments 2, 3, and 4 used polysyllabic primes and polysyllabic targets whose initial syllables shared phonological information (e.g., NUISANCE-NOODLE), orthographic information (e.g., RATIO-RATIFY), both (e.g., FUNNEL-FUNNY), or were unrelated (e.g., SERMON-NOODLE). Participants engaged in a lexical decision (Experiments 1, 3, and 4) or a shadowing (Experiment 2) task with a single-trial (Experiments 2 and 3) or subsequent-trial (Experiments 1 and 4) priming procedure. Experiment 5 tested primes and targets that varied in the number of shared graphemes while holding shared phonemes constant at one. Experiment 6 used the procedures of Experiment 2 but a low proportion of related trials. Results revealed that response times were facilitated for prime-target pairs that shared initial phonological and orthographic information. These results were confirmed under conditions in which strategic processing was greatly reduced, suggesting that phonological and orthographic information is automatically activated during spoken-word processing.
English Listeners Use Suprasegmental Cues to Lexical Stress Early During Spoken-Word Recognition
Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee
2017-01-01
Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., “Click on the word admiral”). Displays contained a critical pair of words (e.g., ˈadmiral–ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time. Results: Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern. Conclusions: Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress. PMID:28056135
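The dependent measure in such visual world studies is the proportion of trials on which each display object is fixated at each time sample. A minimal sketch with made-up fixation sequences (sampled at a fixed rate, each sample naming the currently fixated object):

```python
# Sketch: proportion-of-fixations curves from visual-world eye-tracking.
# `trials` is a hypothetical list of per-trial fixation sequences.

def fixation_proportions(trials, roles=("target", "competitor")):
    """For each time sample, the proportion of trials fixating each role."""
    n_samples = min(len(t) for t in trials)
    curves = {r: [] for r in roles}
    for i in range(n_samples):
        for r in roles:
            curves[r].append(sum(t[i] == r for t in trials) / len(trials))
    return curves

trials = [
    ["other", "target", "target", "target"],
    ["competitor", "competitor", "target", "target"],
    ["other", "target", "competitor", "target"],
]
print(fixation_proportions(trials))
```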
The role of syllabic structure in French visual word recognition.
Rouibah, A; Taft, M
2001-03-01
Two experiments are reported in which the processing units involved in the reading of French polysyllabic words are examined. A comparison was made between units following the maximal onset principle (i.e., the spoken syllable) and units following the maximal coda principle (i.e., the basic orthographic syllabic structure [BOSS]). In the first experiment, it took longer to recognize that a syllable was the beginning of a word (e.g., the FOE of FOETUS) than to make the same judgment of a BOSS (e.g., FOET). The fact that a BOSS plus one letter (e.g., FOETU) also took longer to judge than the BOSS indicated that the maximal coda principle applies to the units of processing in French. The second experiment confirmed this, using a lexical decision task with the different units being demarcated on the basis of color. It was concluded that the syllabic structure that is so clearly manifested in the spoken form of French is not involved in visual word recognition.
The locus of word frequency effects in skilled spelling-to-dictation.
Chua, Shi Min; Liow, Susan J Rickard
2014-01-01
In spelling-to-dictation tasks, skilled spellers consistently initiate spelling of high-frequency words faster than that of low-frequency words. Tainturier and Rapp's model of spelling shows three possible loci for this frequency effect: spoken word recognition, orthographic retrieval, and response execution of the first letter. Thus far, researchers have attributed the effect solely to orthographic retrieval without considering spoken word recognition or response execution. To investigate word frequency effects at each of these three loci, Experiment 1 involved a delayed spelling-to-dictation task and Experiment 2 involved a delayed/uncertain task. In Experiment 1, no frequency effect was found in the 1200-ms delayed condition, suggesting that response execution is not affected by word frequency. In Experiment 2, no frequency effect was found in the delayed/uncertain task that reflects the orthographic retrieval, whereas a frequency effect was found in the comparison immediate/uncertain task that reflects both spoken word recognition and orthographic retrieval. The results of this two-part study suggest that frequency effects in spoken word recognition play a substantial role in skilled spelling-to-dictation. Discrepancies between these findings and previous research, and the limitations of the present study, are discussed.
ERIC Educational Resources Information Center
Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.
2012-01-01
The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an event-related potential (ERP) norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a…
Effects of Aging and Noise on Real-Time Spoken Word Recognition: Evidence from Eye Movements
ERIC Educational Resources Information Center
Ben-David, Boaz M.; Chambers, Craig G.; Daneman, Meredyth; Pichora-Fuller, M. Kathleen; Reingold, Eyal M.; Schneider, Bruce A.
2011-01-01
Purpose: To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. Method: Twenty-four younger and 24 older adults followed spoken instructions referring to depicted…
Her Voice Lingers on and Her Memory Is Strategic: Effects of Gender on Directed Forgetting
Yang, Hwajin; Yang, Sujin; Park, Giho
2013-01-01
The literature on directed forgetting has employed exclusively visual words. Thus, the potentially interesting aspects of a spoken utterance, which include not only vocal cues (e.g., prosody) but also the speaker and the listener, have been neglected. This study demonstrates that prosody alone does not influence directed-forgetting effects, while the sex of the speaker and the listener significantly modulate directed-forgetting effects for spoken utterances. Specifically, forgetting costs were attenuated for female-spoken items compared to male-spoken items, and forgetting benefits were eliminated among female listeners but not among male listeners. These results suggest that information conveyed in a female voice draws attention to its distinct perceptual attributes, thus interfering with retention of the semantic meaning, while female listeners' superior capacity for processing the surface features of spoken utterances may predispose them to spontaneously employ adaptive strategies to retain content information despite distraction by perceptual features. Our findings underscore the importance of sex differences when processing spoken messages in directed forgetting. PMID:23691141
Interference Effects on the Recall of Pictures, Printed Words, and Spoken Words.
ERIC Educational Resources Information Center
Burton, John K.; Bruning, Roger H.
1982-01-01
Nouns were presented in triads as pictures, printed words, or spoken words and followed by various types of interference. Measures of short- and long-term memory were obtained. In short-term memory, pictorial superiority occurred with acoustic, and visual and acoustic, but not visual interference. Long-term memory showed superior recall for…
Communicating Emotion: Linking Affective Prosody and Word Meaning
ERIC Educational Resources Information Center
Nygaard, Lynne C.; Queen, Jennifer S.
2008-01-01
The present study investigated the role of emotional tone of voice in the perception of spoken words. Listeners were presented with words that had either a happy, sad, or neutral meaning. Each word was spoken in a tone of voice (happy, sad, or neutral) that was congruent, incongruent, or neutral with respect to affective meaning, and naming…
ERIC Educational Resources Information Center
Guo, Ling-Yu; McGregor, Karla K.; Spencer, Linda J.
2015-01-01
Purpose: The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation.…
ERIC Educational Resources Information Center
Cohen-Goldberg, Ariel M.
2012-01-01
Theories of spoken production have not specifically addressed whether the phonemes of a word compete with each other for selection during phonological encoding (e.g., whether /t/ competes with /k/ in cat). Spoken production theories were evaluated and found to fall into three classes, theories positing (1) no competition, (2) competition among…
Interference from related actions in spoken word production: Behavioural and fMRI evidence.
de Zubicaray, Greig; Fraser, Douglas; Ramajoo, Kori; McMahon, Katie
2017-02-01
Few investigations of lexical access in spoken word production have examined the cognitive and neural mechanisms involved in action naming. These are likely to be more complex than the mechanisms involved in object naming, due to the ways in which conceptual features of action words are represented. The present study employed a blocked cyclic naming paradigm to examine whether related action contexts elicit a semantic interference effect akin to that observed with categorically related objects. Participants named pictures of intransitive actions to avoid a confound with object processing. In Experiment 1, body-part-related actions (e.g., running, walking, skating, hopping) were named significantly more slowly than unrelated actions (e.g., laughing, running, waving, hiding). Experiment 2 employed perfusion functional Magnetic Resonance Imaging (fMRI) to investigate the neural mechanisms involved in this semantic interference effect. Compared to unrelated actions, naming related actions elicited significant perfusion signal increases in frontotemporal cortex, including bilateral inferior frontal gyrus (IFG) and hippocampus, and decreases in bilateral posterior temporal, occipital and parietal cortices, including intraparietal sulcus (IPS). The findings demonstrate a role for temporoparietal cortex in conceptual-lexical processing of intransitive action knowledge during spoken word production, and support the proposed involvement of interference resolution and incremental learning mechanisms in the blocked cyclic naming paradigm. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cross-modal representation of spoken and written word meaning in left pars triangularis.
Liuzzi, Antonietta Gabriella; Bruffaerts, Rose; Peeters, Ronald; Adamczuk, Katarzyna; Keuleers, Emmanuel; De Deyne, Simon; Storms, Gerrit; Dupont, Patrick; Vandenberghe, Rik
2017-04-15
The correspondence in meaning extracted from written versus spoken input remains to be fully understood neurobiologically. Here, in a total of 38 subjects, the functional anatomy of cross-modal semantic similarity for concrete words was determined based on a dual criterion: First, a voxelwise univariate analysis had to show significant activation during a semantic task (property verification) performed with written and spoken concrete words compared to the perceptually matched control condition. Second, in an independent dataset, in these clusters, the similarity in fMRI response pattern to two distinct entities, one presented as a written and the other as a spoken word, had to correlate with the similarity in meaning between these entities. The left ventral occipitotemporal transition zone and ventromedial temporal cortex, retrosplenial cortex, pars orbitalis bilaterally, and the left pars triangularis were all activated in the univariate contrast. Only the left pars triangularis showed a cross-modal semantic similarity effect. There was no effect of phonological or orthographic similarity in this region. The cross-modal semantic similarity effect was confirmed by a secondary analysis in the cytoarchitectonically defined BA45. A semantic similarity effect was also present in the ventral occipital regions but only within the visual modality, and in the anterior superior temporal cortex only within the auditory modality. This study provides direct evidence for the coding of word meaning in BA45 and positions its contribution to semantic processing at the confluence of input-modality specific pathways that code for meaning within the respective input modalities. Copyright © 2017 Elsevier Inc. All rights reserved.
Hurtig, Anders; Keus van de Poll, Marijke; Pekkola, Elina P.; Hygge, Staffan; Ljung, Robert; Sörqvist, Patrik
2016-01-01
Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants’ first- (L1) and second-language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 s) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 in comparison with recall of spoken words presented in L1. Words that were presented with a high SNR (+12 dBA) improved recall compared to a low SNR (+3 dBA). Reverberation time interacted with SNR to the effect that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language. PMID:26834665
Adults' Self-Directed Learning of an Artificial Lexicon: The Dynamics of Neighborhood Reorganization
ERIC Educational Resources Information Center
Bardhan, Neil Prodeep
2010-01-01
Artificial lexicons have previously been used to examine the time course of the learning and recognition of spoken words, the role of segment type in word learning, and the integration of context during spoken word recognition. However, in all of these studies the experimenter determined the frequency and order of the words to be learned. In three…
ERIC Educational Resources Information Center
Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A.; Marslen-Wilson, William D.; Tyler, Lorraine K.
2011-01-01
Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning…
Tracking speech comprehension in space and time.
Pulvermüller, Friedemann; Shtyrov, Yury; Ilmoniemi, Risto J; Marslen-Wilson, William D
2006-07-01
A fundamental challenge for the cognitive neuroscience of language is to capture the spatio-temporal patterns of brain activity that underlie critical functional components of the language comprehension process. We combine here psycholinguistic analysis, whole-head magnetoencephalography (MEG), the Mismatch Negativity (MMN) paradigm, and state-of-the-art source localization techniques (Equivalent Current Dipole and L1 Minimum-Norm Current Estimates) to locate the process of spoken word recognition at a specific moment in space and time. The magnetic MMN to words presented as rare "deviant stimuli" in an oddball paradigm among repetitive "standard" speech stimuli peaked 100-150 ms after the point at which the information in the acoustic input was sufficient for word recognition. The latency with which words were recognized corresponded to that of an MMN source in the left superior temporal cortex. There was a significant correlation (r = 0.7) of latency measures of word recognition in individual study participants with the latency of the activity peak of the superior temporal source. These results demonstrate a correspondence between the behaviorally determined recognition point for spoken words and the cortical activation in left posterior superior temporal areas. Both the MMN calculated in the classic manner, obtained by subtracting standard from deviant stimulus response recorded in the same experiment, and the identity MMN (iMMN), defined as the difference between the neuromagnetic responses to the same stimulus presented as standard and deviant stimulus, showed the same significant correlation with word recognition processes.
Effects of lexical competition on immediate memory span for spoken words.
Goh, Winston D; Pisoni, David B
2003-08-01
Current theories and models of the structural organization of verbal short-term memory are primarily based on evidence obtained from manipulations of features inherent in the short-term traces of the presented stimuli, such as phonological similarity. In the present study, we investigated whether properties of the stimuli that are not inherent in the short-term traces of spoken words would affect performance in an immediate memory span task. We studied the lexical neighbourhood properties of the stimulus items, which are based on the structure and organization of words in the mental lexicon. The experiments manipulated lexical competition by varying the phonological neighbourhood structure (i.e., neighbourhood density and neighbourhood frequency) of the words on a test list while controlling for word frequency and intra-set phonological similarity (family size). Immediate memory span for spoken words was measured under repeated and nonrepeated sampling procedures. The results demonstrated that lexical competition only emerged when a nonrepeated sampling procedure was used and the participants had to access new words from their lexicons. These findings were not dependent on individual differences in short-term memory capacity. Additional results showed that the lexical competition effects did not interact with proactive interference. Analyses of error patterns indicated that item-type errors, but not positional errors, were influenced by the lexical attributes of the stimulus items. These results complement and extend previous findings that have argued for separate contributions of long-term knowledge and short-term memory rehearsal processes in immediate verbal serial recall tasks.
Positive Emotional Language in the Final Words Spoken Directly Before Execution
Hirschmüller, Sarah; Egloff, Boris
2016-01-01
How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided evidence that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one’s own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality. PMID:26793135
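Computerized quantitative text analysis of this kind reduces, at its core, to dictionary-based word counting. A toy sketch (the mini-dictionaries are illustrative placeholders, not the lexicon used in the study):

```python
# Sketch of dictionary-based emotion word counting. The word sets below
# are hypothetical stand-ins for a validated emotion lexicon.

POSITIVE = {"love", "peace", "thank", "hope", "free"}
NEGATIVE = {"fear", "pain", "sorry", "hate", "death"}

def emotion_proportions(text):
    """Proportion of positive and negative emotion words in a text."""
    words = [w.strip(".,!?;:'\"").lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE for w in words) / len(words)
    neg = sum(w in NEGATIVE for w in words) / len(words)
    return pos, neg

print(emotion_proportions("I love you all and I hope for peace."))
```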
Liebenthal, Einat; Silbersweig, David A.; Stern, Emily
2016-01-01
Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unveil in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala—a subcortical center for emotion perception—are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody which evolves on longer time scales and is conveyed by fine-grained spectral cues appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states. PMID:27877106
Bertels, Julie; Kolinsky, Régine
2016-09-01
Although the influence of the emotional content of stimuli on attention has typically been considered to occur within a trial, recent studies have revealed that the presentation of such stimuli also involves a slower component. The aim of the present study was to investigate fast and slow effects of negative (Exp. 1) and taboo (Exp. 2) spoken words. For this purpose, we used an auditory variant of the emotional Stroop paradigm in which each emotional word was followed by a sequence of neutral words. Replicating results from our previous study, we observed slow but no fast effects of negative and taboo words, which we interpreted as reflecting difficulties in disengaging attention from their emotional dimension. Interestingly, while the presentation of a negative word only delayed the processing of the immediately subsequent neutral word, slow effects of taboo words were long-lasting. Nevertheless, such attentional effects were only observed when the emotional words were presented in the first block of trials, suggesting that once participants develop strategies to perform the task, attention-grabbing effects of emotional words disappear. Hence, far from being automatic, the occurrence of these effects depends on participants' attentional set.
Speaker information affects false recognition of unstudied lexical-semantic associates.
Luthra, Sahil; Fox, Neal P; Blumstein, Sheila E
2018-05-01
Recognition of and memory for a spoken word can be facilitated by a prior presentation of that word spoken by the same talker. However, it is less clear whether this speaker congruency advantage generalizes to facilitate recognition of unheard related words. The present investigation employed a false memory paradigm to examine whether information about a speaker's identity in items heard by listeners could influence the recognition of novel items (critical intruders) phonologically or semantically related to the studied items. In Experiment 1, false recognition of semantically associated critical intruders was sensitive to speaker information, though only when subjects attended to talker identity during encoding. Results from Experiment 2 also provide some evidence that talker information affects the false recognition of critical intruders. Taken together, the present findings indicate that indexical information is able to contact the lexical-semantic network to affect the processing of unheard words.
Production Is Only Half the Story — First Words in Two East African Languages
Alcock, Katherine J.
2017-01-01
Theories of early learning of nouns in children’s vocabularies divide into those that emphasize input (language and non-linguistic aspects) and those that emphasize child conceptualisation. Most data though come from production alone, assuming that learning a word equals speaking it. Methodological issues can mean production and comprehension data within or across input languages are not comparable. Early vocabulary production and comprehension were examined in children hearing two Eastern Bantu languages whose grammatical features may encourage early verb knowledge. Parents of 208 infants aged 8–20 months were interviewed using Communicative Development Inventories that assess infants’ first spoken and comprehended words. Raw totals, and proportions of chances to know a word, were compared to data from other languages. First spoken words were mainly nouns (75–95% were nouns versus less than 10% verbs) but first comprehended words included more verbs (15% were verbs) than spoken words did. The proportion of children’s spoken words that were verbs increased with vocabulary size, but not the proportion of comprehended words. Significant differences were found between children’s comprehension and production but not between languages. This may be for pragmatic reasons, rather than due to concepts with which children approach language learning, or directly due to the input language. PMID:29163280
The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions
ERIC Educational Resources Information Center
Brouwer, Susanne; Bradlow, Ann R.
2016-01-01
This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…
Probabilistic Phonotactics as a Cue for Recognizing Spoken Cantonese Words in Speech
ERIC Educational Resources Information Center
Yip, Michael C. W.
2017-01-01
Previous experimental psycholinguistic studies have suggested that probabilistic phonotactic information may signal the locations of word boundaries in continuous speech, offering a potential solution to the empirical question of how we recognize and segment individual spoken words in speech. We investigated this issue by using…
Spoken Word Recognition of Chinese Words in Continuous Speech
ERIC Educational Resources Information Center
Yip, Michael C. W.
2015-01-01
The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending positions of Cantonese syllables than others, this probabilistic information may cue the locations…
Orthographic Facilitation Effects on Spoken Word Production: Evidence from Chinese
ERIC Educational Resources Information Center
Zhang, Qingfang; Weekes, Brendan Stuart
2009-01-01
The aim of this experiment was to investigate the time course of orthographic facilitation on picture naming in Chinese. We used a picture-word paradigm to investigate orthographic and phonological facilitation on monosyllabic spoken word production in native Mandarin speakers. Both the stimulus-onset asynchrony (SOA) and the picture-word…
L2 Gender Facilitation and Inhibition in Spoken Word Recognition
ERIC Educational Resources Information Center
Behney, Jennifer N.
2011-01-01
This dissertation investigates the role of grammatical gender facilitation and inhibition in second language (L2) learners' spoken word recognition. Native speakers of languages that have grammatical gender are sensitive to gender marking when hearing and recognizing a word. Gender facilitation refers to when a given noun that is preceded by an…
A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition
ERIC Educational Resources Information Center
Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko
2015-01-01
When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…
Petrova, Ana; Gaskell, M. Gareth; Ferrand, Ludovic
2011-01-01
Many studies have repeatedly shown an orthographic consistency effect in the auditory lexical decision task. Words with phonological rimes that could be spelled in multiple ways (i.e., inconsistent words) typically produce longer auditory lexical decision latencies and more errors than do words with rimes that could be spelled in only one way (i.e., consistent words). These results have been extended to different languages and tasks, suggesting that the effect is quite general and robust. Despite this growing body of evidence, some psycholinguists believe that orthographic effects on spoken language are exclusively strategic, post-lexical, or restricted to peculiar (low-frequency) words. In the present study, we manipulated consistency and word-frequency orthogonally in order to explore whether the orthographic consistency effect extends to high-frequency words. Two different tasks were used: lexical decision and rime detection. Both tasks produced reliable consistency effects for both low- and high-frequency words. Furthermore, in Experiment 1 (lexical decision), an interaction revealed a stronger consistency effect for low-frequency words than for high-frequency words, as initially predicted by Ziegler and Ferrand (1998), whereas no interaction was found in Experiment 2 (rime detection). Our results extend previous findings by showing that the orthographic consistency effect is obtained not only for low-frequency words but also for high-frequency words. Furthermore, these effects were also obtained in a rime detection task, which does not require the explicit processing of orthographic structure. Globally, our results suggest that literacy changes the way people process spoken words, even for frequent words. PMID:22025916
The impact of music on learning and consolidation of novel words.
Tamminen, Jakke; Rastle, Kathleen; Darby, Jess; Lucas, Rebecca; Williamson, Victoria J
2017-01-01
Music can be a powerful mnemonic device, as shown by a body of literature demonstrating that listening to text sung to a familiar melody results in better memory for the words compared to conditions where they are spoken. Furthermore, patients with a range of memory impairments appear to be able to form new declarative memories when they are encoded in the form of lyrics in a song, while unable to remember similar materials after hearing them in the spoken modality. Whether music facilitates the acquisition of completely new information, such as new vocabulary, remains unknown. Here we report three experiments in which adult participants learned novel words in the spoken or sung modality. While we found no benefit of musical presentation on free recall or recognition memory of novel words, novel words learned in the sung modality were more strongly integrated in the mental lexicon compared to words learned in the spoken modality. This advantage for the sung words was only present when the training melody was familiar. The impact of musical presentation on learning therefore appears to extend beyond episodic memory and can be reflected in the emergence and properties of new lexical representations.
Long-term temporal tracking of speech rate affects spoken-word recognition.
Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin
2014-08-01
Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. © The Author(s) 2014.
Semantic and phonological schema influence spoken word learning and overnight consolidation.
Havas, Viktória; Taylor, J S H; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H
2018-06-01
We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.
Pisoni, David B.; Cleary, Miranda
2012-01-01
Large individual differences in spoken word recognition performance have been found in deaf children after cochlear implantation. Recently, Pisoni and Geers (2000) reported that simple forward digit span measures of verbal working memory were significantly correlated with spoken word recognition scores even after potentially confounding variables were statistically controlled for. The present study replicates and extends these initial findings to the full set of 176 participants in the CID cochlear implant study. The pooled data indicate that despite statistical “partialling-out” of differences in chronological age, communication mode, duration of deafness, duration of device use, age at onset of deafness, number of active electrodes, and speech feature discrimination, significant correlations still remain between digit span and several measures of spoken word recognition. Strong correlations were also observed between speaking rate and both forward and backward digit span, a result that is similar to previously reported findings in normal-hearing adults and children. The results suggest that perhaps as much as 20% of the currently unexplained variance in spoken word recognition scores may be independently accounted for by individual differences in cognitive factors related to the speed and efficiency with which phonological and lexical representations of spoken words are maintained in and retrieved from working memory. A smaller percentage, perhaps about 7% of the currently unexplained variance in spoken word recognition scores, may be accounted for in terms of working memory capacity. We discuss how these relationships may arise and their contribution to subsequent speech and language development in prelingually deaf children who use cochlear implants. PMID:12612485
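The "partialling-out" analysis described above can be made concrete: regress both variables of interest on the control variables and correlate the residuals; squaring the result gives the uniquely accounted-for variance (for instance, a partial r of about .45 would correspond to the ~20% figure quoted above). A minimal sketch in Python, with hypothetical variable names, assuming only NumPy:

    import numpy as np

    def partial_corr(x, y, covariates):
        # Correlate x and y (e.g., digit span and a word recognition score)
        # after regressing out the control variables (e.g., chronological age,
        # duration of deafness, duration of device use) from each.
        Z = np.column_stack([np.ones(len(x)), covariates])
        rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residualize x
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residualize y
        return np.corrcoef(rx, ry)[0, 1]

Squaring the returned value gives the share of variance in the recognition measure uniquely attributable to the working memory measure.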
Online Lexical Competition during Spoken Word Recognition and Word Learning in Children and Adults
ERIC Educational Resources Information Center
Henderson, Lisa; Weighall, Anna; Brown, Helen; Gaskell, Gareth
2013-01-01
Lexical competition that occurs as speech unfolds is a hallmark of adult oral language comprehension crucial to rapid incremental speech processing. This study used pause detection to examine whether lexical competition operates similarly at 7-8 years and tested variables that influence "online" lexical activity in adults. Children…
ERIC Educational Resources Information Center
Singh, Leher; Tan, Aloysia; Wewalaarachchi, Thilanga D.
2017-01-01
Children undergo gradual progression in their ability to differentiate correct and incorrect pronunciations of words, a process that is crucial to establishing a native vocabulary. For the most part, the development of mature phonological representations has been researched by investigating children's sensitivity to consonant and vowel variation,…
Modeling the Control of Phonological Encoding in Bilingual Speakers
ERIC Educational Resources Information Center
Roelofs, Ardi; Verhoef, Kim
2006-01-01
Phonological encoding is the process by which speakers retrieve phonemic segments for morphemes from memory and use the segments to assemble phonological representations of words to be spoken. When conversing in one language, bilingual speakers have to resist the temptation of encoding word forms using the phonological rules and representations of…
Distraction Control Processes in Free Recall: Benefits and Costs to Performance
ERIC Educational Resources Information Center
Marsh, John E.; Sörqvist, Patrik; Hodgetts, Helen M.; Beaman, C. Philip; Jones, Dylan M.
2015-01-01
How is semantic memory influenced by individual differences under conditions of distraction? This question was addressed by observing how participants recalled visual target words-drawn from a single category-while ignoring spoken distractor words that were members of either the same or a different (single) category. Working memory capacity (WMC)…
High Frequency rTMS over the Left Parietal Lobule Increases Non-Word Reading Accuracy
ERIC Educational Resources Information Center
Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano
2012-01-01
Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe…
Elbro, C; Nielsen, I; Petersen, D K
1994-01-01
This article concerns difficulties in reading and language skills that persist from childhood into adult life. The aims were twofold: (1) to find measures of adult reading processes that validate adults' retrospective reports of difficulties in learning to read during the school years, and (2) to search for indications of basic deficits in phonological processing that may point toward underlying causes of reading difficulties. Adults who reported a history of difficulties in learning to read (n=102) were distinctly disabled in phonological coding in reading, compared to adults without similar histories (n=56). They were less disabled in the comprehension of written passages, and the comprehension disability was explained by the phonological difficulties. A number of indications were found that adults with poor phonological coding skills in reading (i.e., dyslexia) have basic deficits in phonological representations of spoken words, even when semantic word knowledge, phonemic awareness, educational level, and daily reading habits are taken into account. It is suggested that dyslexics possess less distinct phonological representations of spoken words.
Phonological Neighborhood Effects in Spoken Word Production: An fMRI Study
ERIC Educational Resources Information Center
Peramunage, Dasun; Blumstein, Sheila E.; Myers, Emily B.; Goldrick, Matthew; Baese-Berk, Melissa
2011-01-01
The current study examined the neural systems underlying lexically conditioned phonetic variation in spoken word production. Participants were asked to read aloud singly presented words, which either had a voiced minimal pair (MP) neighbor (e.g., cape) or lacked a minimal pair (NMP) neighbor (e.g., cake). The voiced neighbor never appeared in the…
Do Phonological Constraints on the Spoken Word Affect Visual Lexical Decision?
ERIC Educational Resources Information Center
Lee, Yang; Moreno, Miguel A.; Carello, Claudia; Turvey, M. T.
2013-01-01
Reading a word may involve the spoken language in two ways: in the conversion of letters to phonemes according to the conventions of the language's writing system and the assimilation of phonemes according to the language's constraints on speaking. If so, then words that require assimilation when uttered would require a change in the phonemes…
ERIC Educational Resources Information Center
Malins, Jeffrey G.; Joanisse, Marc F.
2010-01-01
We used eyetracking to examine how tonal versus segmental information influence spoken word recognition in Mandarin Chinese. Participants heard an auditory word and were required to identify its corresponding picture from an array that included the target item ("chuang2" "bed"), a phonological competitor (segmental: chuang1 "window"; cohort:…
Shtyrov, Yury; Osswald, Katja; Pulvermüller, Friedemann
2008-01-01
The mismatch negativity response, considered a brain correlate of automatic preattentive auditory processing, is enhanced for word stimuli as compared with acoustically matched pseudowords. This lexical enhancement, taken as a signature of activation of language-specific long-term memory traces, was investigated here using functional magnetic resonance imaging to complement the previous electrophysiological studies. In a passive oddball paradigm, word stimuli were randomly presented as rare deviants among frequent pseudowords; the reverse conditions employed infrequent pseudowords among word stimuli. Random-effect analysis indicated clearly distinct patterns for the different lexical types. Whereas the hemodynamic mismatch response was significant for the word deviants, it did not reach significance for the pseudoword conditions. This difference, more pronounced in the left than right hemisphere, was also assessed by analyzing average parameter estimates in regions of interest within both temporal lobes. A significant hemisphere-by-lexicality interaction confirmed stronger blood oxygenation level-dependent mismatch responses to words than pseudowords in the left but not in the right superior temporal cortex. The increased left superior temporal activation and the laterality of cortical sources elicited by spoken words compared with pseudowords may indicate the activation of cortical circuits for lexical material even in passive oddball conditions and suggest involvement of the left superior temporal areas in housing such word-processing neuronal circuits.
Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind
Burton, Harold; Sinclair, Robert J.; Agato, Alvin
2012-01-01
We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 yrs. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with “old”/“new” recognition. Left dorsolateral prefrontal cortex had larger responses to “old” words only with Braille. Larger occipital cortex responses to “new” Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for “new” words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering “old” words. A larger response when identifying “new” words possibly resulted from exhaustive recollecting the sensory properties of “old” words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a “sensory echo” that aids recollection. PMID:22251836
Intelligibility of emotional speech in younger and older adults.
Dupuis, Kate; Pichora-Fuller, M Kathleen
2014-01-01
Little is known about the influence of vocal emotions on speech understanding. Word recognition accuracy for stimuli spoken to portray seven emotions (anger, disgust, fear, sadness, neutral, happiness, and pleasant surprise) was tested in younger and older listeners. Emotions were presented in either mixed (heterogeneous emotions mixed in a list) or blocked (homogeneous emotion blocked in a list) conditions. Three main hypotheses were tested. First, vocal emotion affects word recognition accuracy; specifically, portrayals of fear enhance word recognition accuracy because listeners orient to threatening information and/or distinctive acoustical cues such as high pitch mean and variation. Second, older listeners recognize words less accurately than younger listeners, but the effects of different emotions on intelligibility are similar across age groups. Third, blocking emotions in a list results in better word recognition accuracy, especially for older listeners, and reduces the effect of emotion on intelligibility because, as listeners develop expectations about vocal emotion, the allocation of processing resources can shift from emotional to lexical processing. Emotion was the within-subjects variable: all participants heard speech stimuli consisting of a carrier phrase followed by a target word spoken by either a younger or an older talker, with an equal number of stimuli portraying each of seven vocal emotions. The speech was presented in multi-talker babble at signal-to-noise ratios adjusted for each talker and each listener age group. Listener age (younger, older), condition (mixed, blocked), and talker (younger, older) were the main between-subjects variables. Fifty-six students (mean age = 18.3 years) were recruited from an undergraduate psychology course; 56 older adults (mean age = 72.3 years) were recruited from a volunteer pool. All participants had clinically normal pure-tone audiometric thresholds at frequencies ≤3000 Hz. There were significant main effects of emotion, listener age group, and condition on the accuracy of word recognition in noise. Stimuli spoken in a fearful voice were the most intelligible, while those spoken in a sad voice were the least intelligible. Overall, word recognition accuracy was poorer for older than younger adults, but there was no main effect of talker, and the pattern of the effects of different emotions on intelligibility did not differ significantly across age groups. Acoustical analyses helped elucidate the effect of emotion and some intertalker differences. Finally, all participants performed better when emotions were blocked. For both groups, performance improved over repeated presentations of each emotion in both blocked and mixed conditions. These results are the first to demonstrate a relationship between vocal emotion and word recognition accuracy in noise for younger and older listeners. In particular, the enhancement of intelligibility by emotion is greatest for words spoken to portray fear and presented heterogeneously with other emotions. Fear may have a specialized role in orienting attention to words heard in noise. This finding may be an auditory counterpart to the enhanced detection of threat information in visual displays. The effect of vocal emotion on word recognition accuracy is preserved in older listeners with good audiograms, and both age groups benefit from blocking and the repetition of emotions.
Banzina, Elina; Dilley, Laura C; Hewitt, Lynne E
2016-08-01
The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated the production of SS and UU syllables by Russian learners of English. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found, likely due to a transfer of native phonological features. Next, a cross-modal phonological priming technique combined with a lexical decision task assessed the effect of inaccurate SS and UU syllable productions on native American English listeners' speech processing. Inaccurate UU vowels led to significant inhibition of lexical access, while reduced SS vowels revealed less interference. The results have implications for understanding the role of SS and UU syllables for word recognition and English pronunciation instruction.
Accelerating Receptive Language Acquisition in Kindergarten Students: An Action Research Study
ERIC Educational Resources Information Center
Hewitt, Christine L.
2013-01-01
Receptive language skills allow students to understand the meaning of words spoken to them. When students are unable to comprehend the majority of the words that are spoken to them, they do not have the ability to act on those words, follow given directions, build on prior knowledge, or construct adequate meaning. The inability to understand the…
ERIC Educational Resources Information Center
Casey, Laura Baylot; Bicard, David F.
2009-01-01
Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…
Reconsidering the role of temporal order in spoken word recognition.
Toscano, Joseph C; Anderson, Nathaniel D; McMurray, Bob
2013-10-01
Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes.
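A small sketch (Python; toy phoneme transcriptions, not the study's materials) makes the anadrome notion concrete: two words are anadromes when they contain the same multiset of phonemes arranged in different orders.

    from collections import defaultdict

    def find_anadromes(lexicon):
        # lexicon: dict mapping a word to its phoneme tuple, e.g.
        # {"bus": ("b", "ʌ", "s"), "sub": ("s", "ʌ", "b")} -- toy entries.
        groups = defaultdict(list)
        for word, phones in lexicon.items():
            groups[tuple(sorted(phones))].append(word)
        # Buckets with 2+ words contain candidate anadrome sets (a stricter
        # version would also verify that the phoneme orders actually differ).
        return [words for words in groups.values() if len(words) > 1]

For example, find_anadromes({"bus": ("b", "ʌ", "s"), "sub": ("s", "ʌ", "b"), "sun": ("s", "ʌ", "n")}) returns [["bus", "sub"]]; "sun" shares only the vowel with the target, matching the study's third comparison condition.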
Interpreting Chicken-Scratch: Lexical Access for Handwritten Words
Barnhart, Anthony S.; Goldinger, Stephen D.
2014-01-01
Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word recognition. The current study examined the effects of handwriting on a series of lexical variables thought to influence bottom-up and top-down processing, including word frequency, regularity, bidirectional consistency, and imageability. The results suggest that the natural physical ambiguity of handwritten stimuli forces a greater reliance on top-down processes, because almost all effects were magnified, relative to conditions with computer print. These findings suggest that processes of word perception naturally adapt to handwriting, compensating for physical ambiguity by increasing top-down feedback. PMID:20695708
Singh, Niharika; Mishra, Ramesh Kumar
2015-01-01
Using a variant of the visual world eye tracking paradigm, we examined whether language non-selective activation of translation equivalents leads to attention capture and distraction in a visual task in bilinguals. High and low proficient Hindi-English speaking bilinguals were instructed to programme a saccade towards a line drawing which changed colour among other distractor objects. A spoken word, irrelevant to the main task, was presented before the colour change. On critical trials, one of the line drawings was phonologically related to the translation equivalent of the spoken word. Results showed that saccade latency towards the target was significantly higher in the presence of this cross-linguistic translation competitor than when the display contained completely unrelated objects. Participants were also slower when the display contained the referent of the spoken word among the distractors. However, the bilingual groups did not differ with regard to the interference effect observed. These findings suggest that spoken words activate translation equivalents, which bias attention and lead to interference in goal-directed action in the visual domain. PMID:25775184
ERIC Educational Resources Information Center
Hicks, Emily D.
2004-01-01
The cultural activities of the San Diego-Tijuana region, including the performance of music and spoken word, are documented. The activities described emerged from rhizomatic, transnational points of contact.
Neural correlates of successful semantic processing during propofol sedation.
Adapa, Ram M; Davis, Matthew H; Stamatakis, Emmanuel A; Absalom, Anthony R; Menon, David K
2014-07-01
Sedation has a graded effect on brain responses to auditory stimuli: perceptual processing persists at sedation levels that attenuate more complex processing. We used fMRI in healthy volunteers sedated with propofol to assess changes in neural responses to spoken stimuli. Volunteers were scanned awake, sedated, and during recovery, while making perceptual or semantic decisions about nonspeech sounds or spoken words respectively. Sedation caused increased error rates and response times, and differentially affected responses to words in the left inferior frontal gyrus (LIFG) and the left inferior temporal gyrus (LITG). Activity in LIFG regions putatively associated with semantic processing was significantly reduced by sedation despite sedated volunteers continuing to make accurate semantic decisions. Instead, LITG activity was preserved for words relative to nonspeech sounds and may therefore be associated with persistent semantic processing during the deepest levels of sedation. These results suggest functionally distinct contributions of frontal and temporal regions to semantic decision making. They have implications for functional imaging studies of language and for understanding mechanisms of impaired speech comprehension in postoperative patients with residual levels of anesthetic, and may contribute to the development of frameworks against which EEG-based monitors could be calibrated to detect awareness under anesthesia. Copyright © 2013 Wiley Periodicals, Inc.
Finding Relevant Data in a Sea of Languages
2016-04-26
full machine-translated text, unbiased word clouds, query-biased word clouds, and query-biased sentence... and information retrieval to automate language processing tasks so that the limited number of linguists available for analyzing text and spoken... the crime (stock market). The Cross-LAnguage Search Engine (CLASE) has already preprocessed the documents, extracting text to identify the language
Conway, Christopher M.; Deocampo, Joanne A.; Walk, Anne M.; Anaya, Esperanza M.; Pisoni, David B.
2015-01-01
Purpose: The authors investigated the ability of deaf children with cochlear implants (CIs) to use sentence context to facilitate the perception of spoken words. Method: Deaf children with CIs (n = 24) and an age-matched group of children with normal hearing (n = 31) were presented with lexically controlled sentences and were asked to repeat each sentence in its entirety. Performance was analyzed at each of 3 word positions of each sentence (first, second, and third key word). Results: Whereas the children with normal hearing showed robust effects of contextual facilitation—improved speech perception for the final words in a sentence—the deaf children with CIs on average showed no such facilitation. Regression analyses indicated that for the deaf children with CIs, Forward Digit Span scores significantly predicted accuracy scores for all 3 positions, whereas performance on the Stroop Color and Word Test, Children’s Version (Golden, Freshwater, & Golden, 2003) predicted how much contextual facilitation was observed at the final word. Conclusions: The pattern of results suggests that some deaf children with CIs do not use sentence context to improve spoken word recognition. The inability to use sentence context may be due to possible interactions between language experience and cognitive factors that affect the ability to successfully integrate temporal–sequential information in spoken language. PMID:25029170
ERP correlates of motivating voices: quality of motivation and time-course matters.
Zougkou, Konstantina; Weinstein, Netta; Paulmann, Silke
2017-10-01
Here, we conducted the first study to explore how motivations expressed through speech are processed in real-time. Participants listened to sentences spoken in two types of well-studied motivational tones (autonomy-supportive and controlling), or a neutral tone of voice. To examine this, listeners were presented with sentences that either signaled motivations through prosody (tone of voice) and words simultaneously (e.g. 'You absolutely have to do it my way' spoken in a controlling tone of voice), or lacked motivationally biasing words (e.g. 'Why don't we meet again tomorrow' spoken in a motivational tone of voice). Event-related brain potentials (ERPs) in response to motivations conveyed through words and prosody showed that listeners rapidly distinguished between motivations and neutral forms of communication as shown in enhanced P2 amplitudes in response to motivational when compared with neutral speech. This early detection mechanism is argued to help determine the importance of incoming information. Once assessed, motivational language is continuously monitored and thoroughly evaluated. When compared with neutral speech, listening to controlling (but not autonomy-supportive) speech led to enhanced late potential ERP mean amplitudes, suggesting that listeners are particularly attuned to controlling messages. The importance of controlling motivation for listeners is mirrored in effects observed for motivations expressed through prosody only. Here, an early rapid appraisal, as reflected in enhanced P2 amplitudes, is only found for sentences spoken in controlling (but not autonomy-supportive) prosody. Once identified as sounding pressuring, the message seems to be preferentially processed, as shown by enhanced late potential amplitudes in response to controlling prosody. Taken together, results suggest that motivational and neutral language are differentially processed; further, the data suggest that listening to cues signaling pressure and control cannot be ignored and lead to preferential, and more in-depth processing mechanisms. © The Author (2017). Published by Oxford University Press.
Development of brain networks involved in spoken word processing of Mandarin Chinese.
Cao, Fan; Khalid, Kainat; Lee, Rebecca; Brennan, Christine; Yang, Yanhui; Li, Kuncheng; Bolger, Donald J; Booth, James R
2011-08-01
Developmental differences in phonological and orthographic processing of Chinese spoken words were examined in 9-year-olds, 11-year-olds and adults using functional magnetic resonance imaging (fMRI). Rhyming and spelling judgments were made to two-character words presented sequentially in the auditory modality. Developmental comparisons between adults and both groups of children combined showed that age-related changes in activation in visuo-orthographic regions depended on the task. There were developmental increases in the left inferior temporal gyrus and the right inferior occipital gyrus in the spelling task, suggesting more extensive visuo-orthographic processing in a task that required access to these representations. Conversely, there were developmental decreases in activation in the left fusiform gyrus and left middle occipital gyrus in the rhyming task, suggesting that the development of reading is marked by reduced involvement of orthography in a spoken language task that does not require access to these orthographic representations. Developmental decreases may arise from the existence of extensive homophony (auditory words that have multiple spellings) in Chinese. In addition, we found that 11-year-olds and adults showed similar activation in the left superior temporal gyrus across tasks, with both groups showing greater activation than 9-year-olds. This pattern suggests early development of perceptual representations of phonology. In contrast, 11-year-olds and 9-year-olds showed similar activation in the left inferior frontal gyrus across tasks, with both groups showing weaker activation than adults. This pattern suggests late development of controlled retrieval and selection of lexical representations. Altogether, this study suggests differential effects of character acquisition on development of components of the language network in Chinese as compared to previous reports on alphabetic languages. Published by Elsevier Inc.
A preliminary study of subjective frequency estimates of words spoken in Cantonese.
Yip, M C
2001-06-01
A database is presented of subjective frequency estimates for a set of 30 Chinese homophones. The estimates are based on an analysis of responses from a simple listening task completed by 120 university students, who were asked to report the first meaning that came to mind upon hearing a Chinese homophone by writing down the corresponding Chinese characters. There was a correlation of .66 between the frequencies of spoken and written words, suggesting that distributional information about lexical representations is generally independent of modality. These subjective frequency counts should be useful in the construction of material sets for research on word recognition using spoken Chinese (Cantonese).
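The .66 figure is a plain product-moment correlation between the two frequency measures; a minimal sketch (Python, with hypothetical numbers standing in for the database's counts):

    import numpy as np

    # Hypothetical stand-ins for six homophones: subjective spoken-frequency
    # estimates and written-corpus frequency counts.
    spoken_est = np.array([0.82, 0.40, 0.65, 0.12, 0.90, 0.33])
    written_freq = np.array([510.0, 120.0, 300.0, 45.0, 880.0, 150.0])
    # Correlating against log frequency is conventional for corpus counts.
    r = np.corrcoef(spoken_est, np.log(written_freq))[0, 1]
    print(f"spoken-written frequency correlation: r = {r:.2f}")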
Eye Movements Reveal Fast, Voice-Specific Priming
Papesh, Megan H.; Goldinger, Stephen D.; Hout, Michael C.
2015-01-01
In spoken word perception, voice specificity effects are well-documented: When people hear repeated words in some task, performance is generally better when repeated items are presented in their originally heard voices, relative to changed voices. A key theoretical question about voice specificity effects concerns their time-course: Some studies suggest that episodic traces exert their influence late in lexical processing (the time-course hypothesis; McLennan & Luce, 2005), whereas others suggest that episodic traces influence immediate, online processing. We report two eye-tracking studies investigating the time-course of voice-specific priming within and across cognitive tasks. In Experiment 1, participants performed modified lexical decision or semantic classification to words spoken by four speakers. The tasks required participants to click a red “×” or a blue “+” located randomly within separate visual half-fields, necessitating trial-by-trial visual search with consistent half-field response mapping. After a break, participants completed a second block with new and repeated items, half spoken in changed voices. Voice effects were robust very early, appearing in saccade initiation times. Experiment 2 replicated this pattern while changing tasks across blocks, ruling out a response priming account. In the General Discussion, we address the time-course hypothesis, focusing on the challenge it presents for empirical disconfirmation, and highlighting the broad importance of indexical effects, beyond studies of priming. PMID:26726911
Carey, Daniel; Mercure, Evelyne; Pizzioli, Fabrizio; Aydelott, Jennifer
2014-12-01
The effects of ear of presentation and competing speech on N400s to spoken words in context were examined in a dichotic sentence priming paradigm. Auditory sentence contexts with a strong or weak semantic bias were presented in isolation to the right or left ear, or with a competing signal presented in the other ear at an SNR of -12 dB. Target words were congruent or incongruent with the sentence meaning. Competing speech attenuated N400s to both congruent and incongruent targets, suggesting that the demand imposed by a competing signal disrupts the engagement of semantic comprehension processes. Bias strength affected N400 amplitudes differentially depending upon ear of presentation: weak contexts presented to the le/RH produced a more negative N400 response to targets than strong contexts, whereas no significant effect of bias strength was observed for sentences presented to the re/LH. The results are consistent with a model of semantic processing in which the RH relies on integrative processing strategies in the interpretation of sentence-level meaning. Copyright © 2014 Elsevier Ltd. All rights reserved.
Gentilucci, Maurizio; Bernardis, Paolo; Crisi, Girolamo; Dalla Volta, Riccardo
2006-07-01
The aim of the present study was to determine whether Broca's area is involved in translating some aspects of arm gesture representations into mouth articulation gestures. In Experiment 1, we applied low-frequency repetitive transcranial magnetic stimulation over Broca's area, and over the symmetrical loci of the right hemisphere, of participants responding verbally to communicative spoken words, to gestures, or to the simultaneous presentation of the two signals. We also performed sham stimulation over the left stimulation loci. In Experiment 2, we performed the same stimulations as in Experiment 1 on participants responding with words congruent and incongruent with gestures. After sham stimulation, voicing parameters were enhanced when participants responded to communicative spoken words or to gestures, as compared to a control condition of word reading. This effect increased when participants responded to the simultaneous presentation of both communicative signals. In contrast, voicing was interfered with when the verbal responses were incongruent with gestures. Left-hemisphere stimulation induced neither enhancement of voicing parameters for words congruent with gestures nor interference with words incongruent with gestures. We interpreted the enhancement of the verbal response to gesturing in terms of an intention to interact directly. Consequently, we proposed that Broca's area is involved in the process of translating into speech those aspects concerning the social intention coded by the gesture. Moreover, we discussed the results in terms of evolution to support the theory [Corballis, M. C. (2002). From hand to mouth: The origins of language. Princeton, NJ: Princeton University Press] proposing that spoken language evolved from an ancient communication system using arm gestures.
Lexical Competition in Non-Native Spoken-Word Recognition
ERIC Educational Resources Information Center
Weber, Andrea; Cutler, Anne
2004-01-01
Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name ("pencil," given target "panda") than on less confusable distractors…
Context and Spoken Word Recognition in a Novel Lexicon
ERIC Educational Resources Information Center
Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.
2008-01-01
Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments,…
Visual Speech Primes Open-Set Recognition of Spoken Words
ERIC Educational Resources Information Center
Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.
2009-01-01
Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins,…
Shen, Wei; Qu, Qingqing; Tong, Xiuhong
2018-05-01
The aim of this study was to investigate the extent to which phonological information mediates the shift of visual attention to printed Chinese words during spoken word recognition, using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen: a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and their eye movements are recorded. In Experiment 1, phonological information was manipulated at full-phonological overlap; in Experiment 2, at partial-phonological overlap; and in Experiment 3, the phonological competitors were manipulated to share either full or partial overlap with targets directly. The three experiments showed phonological competitor effects in both the full-phonological overlap and partial-phonological overlap conditions: phonological competitors attracted more fixations than distractors, suggesting that phonological information mediates the shift of visual attention during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.
Christensen, Thomas A; Almryde, Kyle R; Fidler, Lesley J; Lockwood, Julie L; Antonucci, Sharon M; Plante, Elena
2012-01-01
Attention is crucial for encoding information into memory, and current dual-process models seek to explain the roles of attention in both recollection memory and incidental-perceptual memory processes. The present study combined an incidental memory paradigm with event-related functional MRI to examine the effect of attention at encoding on the subsequent neural activation associated with unintended perceptual memory for spoken words. At encoding, we systematically varied attention levels as listeners heard a list of single English nouns. We then presented these words again in the context of a recognition task and assessed the effect of modulating attention at encoding on the BOLD responses to words that were either attended strongly, weakly, or not heard previously. MRI revealed activity in right-lateralized inferior parietal and prefrontal regions, and positive BOLD signals varied with the relative level of attention present at encoding. Temporal analysis of hemodynamic responses further showed that the time course of BOLD activity was modulated differentially by unintentionally encoded words compared to novel items. Our findings largely support current models of memory consolidation and retrieval, but they also provide fresh evidence for hemispheric differences and functional subdivisions in right frontoparietal attention networks that help shape auditory episodic recall.
The gender congruency effect during bilingual spoken-word recognition
Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa
2016-01-01
We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132
Levels of Phonology Related to Reading and Writing in Middle Childhood
ERIC Educational Resources Information Center
Del Campo, Roxana; Buchanan, William R.; Abbott, Robert D.; Berninger, Virginia W.
2015-01-01
The relationships of different levels of phonological processing (sounds in heard and spoken words for whole words, syllables, phonemes, and rimes) to multi-leveled functional reading or writing systems were studied. Participants in this cross-sectional study were students in fourth-grade (n = 119, mean age 116.5 months) and sixth-grade (n = 105,…
A Closer Look at Phonology as a Predictor of Spoken Sentence Processing and Word Reading
ERIC Educational Resources Information Center
Myers, Suzanne; Robertson, Erin K.
2015-01-01
The goal of this study was to tease apart the roles of phonological awareness (pA) and phonological short-term memory (pSTM) in sentence comprehension, sentence production, and word reading. Children 6- to 10-years of age (N = 377) completed standardized tests of pA ("Elision") and pSTM ("Nonword Repetition") from the…
NASA Astrophysics Data System (ADS)
Collison, Elizabeth A.; Munson, Benjamin; Carney, Arlene E.
2002-05-01
Recent research has attempted to identify the factors that predict speech perception performance among users of cochlear implants (CIs). Studies have found that approximately 20%-60% of the variance in speech perception scores can be accounted for by factors including duration of deafness, etiology, type of device, and length of implant use, leaving approximately 50% of the variance unaccounted for. The current study examines the extent to which vocabulary size and nonverbal cognitive ability predict CI listeners' spoken word recognition. Fifteen postlingually deafened adults with Nucleus or Clarion CIs were given standardized assessments of nonverbal cognitive ability and expressive vocabulary size: the Expressive Vocabulary Test, the Test of Nonverbal Intelligence-III, and the Woodcock-Johnson-III Test of Cognitive Ability, Verbal Comprehension subtest. Two spoken word recognition tasks were administered. In the first, listeners identified isophonemic CVC words. In the second, listeners identified gated words varying in lexical frequency and neighborhood density. Analyses will examine the influence of lexical frequency and neighborhood density on the uniqueness point in the gating task, as well as relationships among nonverbal cognitive ability, vocabulary size, and the two spoken word recognition measures. [Work supported by NIH Grant P01 DC00110 and by the Lions 3M Hearing Foundation.]
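Neighborhood density, one of the lexical variables in the gating task, is conventionally computed as the number of lexicon entries that differ from a target by exactly one phoneme substitution, addition, or deletion; a small sketch (Python; toy transcriptions, hypothetical function name):

    def neighborhood_density(word, lexicon):
        # word: phoneme tuple; lexicon: set of phoneme tuples.
        def one_edit(a, b):
            if abs(len(a) - len(b)) > 1 or a == b:
                return False
            if len(a) == len(b):  # single substitution
                return sum(x != y for x, y in zip(a, b)) == 1
            if len(a) > len(b):   # ensure a is the shorter sequence
                a, b = b, a
            # single addition/deletion: a equals b with one phoneme removed
            return any(a == b[:i] + b[i + 1:] for i in range(len(b)))
        return sum(one_edit(word, entry) for entry in lexicon)

For example, a target like "cat" /k æ t/ would count "cap", "bat", and "cats" as neighbors but not "dog".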
Cohen-Goldberg, Ariel M.; Cholin, Joana; Miozzo, Michele; Rapp, Brenda
2013-01-01
Morphological and phonological processes are tightly interrelated in spoken production. During processing, morphological processes must combine the phonological content of individual morphemes to produce a phonological representation that is suitable for driving phonological processing. Further, morpheme assembly frequently causes changes in a word's phonological well-formedness that must be addressed by the phonology. We report the case of an aphasic individual (WRG) who exhibits an impairment at the morpho-phonological interface. WRG was tested on his ability to produce phonologically complex sequences (specifically, coda clusters of varying sonority) in heteromorphemic and tautomorphemic environments. WRG made phonological errors that reduced coda sonority complexity in multimorphemic words (e.g., passed→[pæstɪd]) but not in monomorphemic words (e.g., past). WRG also made similar insertion errors to repair stress clash in multimorphemic environments, confirming his sensitivity to cross-morpheme well-formedness. We propose that this pattern of performance is the result of an intact phonological grammar acting over the phonological content of morphemic representations that were weakly joined because of brain damage. WRG may constitute the first case of a morpho-phonological impairment—these results suggest that the processes that combine morphemes constitute a crucial component of morpho-phonological processing. PMID:23466641
NASA Technical Reports Server (NTRS)
1973-01-01
The development, construction, and test of a 100-word-vocabulary, near-real-time word recognition system are reported. Included are reasonable replacement of any one or all 100 words in the vocabulary, rapid learning of a new speaker, storage and retrieval of training sets, verbal or manual single-word deletion, continuous adaptation with verbal or manual error correction, on-line verification of vocabulary as spoken, system modes selectable via the verification display keyboard, the relationship of a classified word to neighboring words, and a versatile input/output interface to accommodate a variety of applications.
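The report does not detail its classification algorithm, but small-vocabulary, speaker-trained recognizers of this era typically matched an incoming utterance's feature frames against one stored template per vocabulary word; a minimal dynamic-time-warping sketch (Python; an illustrative assumption, not the system's documented method):

    import numpy as np

    def dtw_distance(a, b):
        # a, b: feature sequences (frames x coefficients).
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def recognize(utterance, templates):
        # templates: dict mapping each trained word to a stored feature sequence.
        return min(templates, key=lambda w: dtw_distance(utterance, templates[w]))

On this view, rapid learning of a new speaker amounts to re-recording the templates, and verbal error correction amounts to replacing or augmenting the template of a misrecognized word.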
Experiments on Urdu Text Recognition
NASA Astrophysics Data System (ADS)
Mukhtar, Omar; Setlur, Srirangaraj; Govindaraju, Venu
Urdu is a language spoken in the Indian subcontinent by an estimated 130-270 million speakers. At the spoken level, Urdu and Hindi are considered dialects of a single language because of shared vocabulary and the similarity in grammar. At the written level, however, Urdu is much closer to Arabic because it is written in Nastaliq, the calligraphic style of the Persian-Arabic script. Therefore, a speaker of Hindi can understand spoken Urdu but may not be able to read written Urdu because Hindi is written in Devanagari script, whereas an Arabic writer can read the written words but may not understand the spoken Urdu. In this chapter we present an overview of written Urdu. Prior research in handwritten Urdu OCR is very limited. We present (perhaps) the first system for recognizing handwritten Urdu words. On a data set of about 1300 handwritten words, we achieved an accuracy of 70% for the top choice, and 82% for the top three choices.
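The two reported figures are top-1 and top-3 accuracy over the recognizer's ranked candidate lists; a minimal helper (Python, hypothetical data layout):

    def top_k_accuracy(ranked_candidates, truths, k):
        # ranked_candidates: one best-first candidate list per test word;
        # truths: the corresponding correct words.
        hits = sum(t in cands[:k] for cands, t in zip(ranked_candidates, truths))
        return hits / len(truths)

On the ~1300-word test set described above, top_k_accuracy(preds, truths, 1) would come out near 0.70 and top_k_accuracy(preds, truths, 3) near 0.82.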
Using spoken words to guide open-ended category formation.
Chauhan, Aneesh; Seabra Lopes, Luís
2011-11-01
Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.
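As a rough illustration of the naming-plus-corrective-feedback loop (not the paper's actual architecture; class and method names are invented), an open-ended learner can store feature instances under spoken-word labels, name new inputs by their nearest stored instance, and fold corrections back in:

    import numpy as np

    class OpenEndedNamer:
        def __init__(self):
            self.instances = {}  # word -> list of feature vectors

        def teach(self, word, features):
            # Teaching an example under a new word creates a new category.
            self.instances.setdefault(word, []).append(np.asarray(features, float))

        def name(self, features):
            # Classify by distance to the nearest stored instance.
            f = np.asarray(features, float)
            scored = ((min(np.linalg.norm(f - e) for e in ex), w)
                      for w, ex in self.instances.items())
            return min(scored, default=(None, None))[1]

        def correct(self, features, right_word):
            # Corrective feedback simply becomes another training example.
            self.teach(right_word, features)

A naming-game round then reduces to: the robot calls name() on its percept, the human either confirms or supplies the right word, and correct() updates the category descriptions incrementally.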
Huettig, Falk; Altmann, Gerry T M
2005-05-01
When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84-107]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word 'piano' when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c) both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word 'piano' unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.
Bogon, Johanna; Eisenbarth, Hedwig; Landgraf, Steffen; Dreisbach, Gesine
2017-09-01
Vocal events offer not only semantic-linguistic content but also information about the identity and the emotional-motivational state of the speaker. Furthermore, most vocal events have implications for our actions and therefore include action-related features. But the relevance and irrelevance of vocal features varies from task to task. The present study investigates binding processes for perceptual and action-related features of spoken words and their modulation by the task representation of the listener. Participants reacted with two response keys to eight different words spoken by a male or a female voice (Experiment 1) or spoken by an angry or neutral male voice (Experiment 2). There were two instruction conditions: half of participants learned eight stimulus-response mappings by rote (SR), and half of participants applied a binary task rule (TR). In both experiments, SR instructed participants showed clear evidence for binding processes between voice and response features indicated by an interaction between the irrelevant voice feature and the response. By contrast, as indicated by a three-way interaction with instruction, no such binding was found in the TR instructed group. These results are suggestive of binding and shielding as two adaptive mechanisms that ensure successful communication and action in a dynamic social environment.
Reduction and elimination of format effects on recall.
Goolkasian, Paula; Foos, Paul W; Krusemark, Daniel C
2008-01-01
Two experiments investigated whether the recall advantage of pictures and spoken words over printed words in working memory (Foos & Goolkasian, 2005; Goolkasian & Foos, 2002) could be reduced by manipulating letter case and sequential versus simultaneous presentation. Participants were required to remember 3 or 6 items presented in varied presentation formats while verifying the accuracy of a sentence. Presenting words in alternating uppercase and lowercase improved recall, and presenting words simultaneously rather than successively removed the effect of presentation format. The findings suggest that directing participants' attention to printed words makes those words more memorable, thereby diminishing or removing the recall disadvantage of printed words relative to pictures and spoken words.
A dual contribution to the involuntary semantic processing of unexpected spoken words.
Parmentier, Fabrice B R; Turner, Jacqueline; Perez, Laura
2014-02-01
Sounds are a major cause of distraction. Unexpected to-be-ignored auditory stimuli presented in the context of an otherwise repetitive acoustic background ineluctably break through selective attention and distract people from an unrelated visual task (deviance distraction). This involuntary capture of attention by deviant sounds has been hypothesized to trigger their semantic appraisal and, in some circumstances, interfere with ongoing performance, but it remains unclear how such processing compares with the automatic processing of distractors in classic interference tasks (e.g., Stroop, flanker, Simon tasks). Using a cross-modal oddball task, we assessed the involuntary semantic processing of deviant sounds in the presence and absence of deviance distraction. The results revealed that some involuntary semantic analysis of spoken distractors occurs in the absence of deviance distraction but that this processing is significantly greater in its presence. We conclude that the automatic processing of spoken distractors reflects 2 contributions, one that is contingent upon deviance distraction and one that is independent from it.
ERIC Educational Resources Information Center
Pompon, Rebecca Hunting; McNeil, Malcolm R.; Spencer, Kristie A.; Kendall, Diane L.
2015-01-01
Purpose: The integrity of selective attention in people with aphasia (PWA) is currently unknown. Selective attention is essential for everyday communication, and inhibition is an important part of selective attention. This study explored components of inhibition--both intentional and reactive inhibition--during spoken-word production in PWA and in…
Crossmodal Semantic Priming by Naturalistic Sounds and Spoken Words Enhances Visual Sensitivity
ERIC Educational Resources Information Center
Chen, Yi-Chuan; Spence, Charles
2011-01-01
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when…
Neurophysiology of speech differences in childhood apraxia of speech.
Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole
2014-01-01
Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.
Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine
2015-01-01
Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070
The role of grammatical category information in spoken word retrieval.
Duràn, Carolina Palma; Pillon, Agnesa
2011-01-01
We investigated the role of lexical syntactic information such as grammatical gender and category in spoken word retrieval processes by using a blocking paradigm in picture and written word naming experiments. In Experiments 1, 3, and 4, we found that the naming of target words (nouns) from pictures or written words was faster when these target words were named within a list in which only words from the same grammatical category had to be produced (homogeneous category list: all nouns) than when they had to be produced within a list that also comprised words from another grammatical category (heterogeneous category list: nouns and verbs). On the other hand, we detected no significant facilitation effect when the target words had to be named within a homogeneous gender list (all masculine nouns) compared to a heterogeneous gender list (both masculine and feminine nouns). In Experiment 2, using the same blocking paradigm but manipulating the semantic category of the items, we found that naming latencies were significantly slower in the semantically homogeneous condition than in the semantically heterogeneous condition. Thus semantic category homogeneity caused interference, rather than the facilitation observed for grammatical category homogeneity. Finally, in Experiment 5, nouns in the heterogeneous category condition had to be named just after a verb (category-switching position) or a noun (same-category position). We found a facilitation effect of category homogeneity but no significant effect of position, which showed that the effect of category homogeneity found in Experiments 1, 3, and 4 was not due to a cost of switching between grammatical categories in the heterogeneous grammatical category list. These findings supported the hypothesis that grammatical category information impacts word retrieval processes in speech production, even when words are to be produced in isolation. They are discussed within the context of extant theories of lexical production.
ERIC Educational Resources Information Center
Caplan, David; Waters, Gloria; Bertram, Julia; Ostrowski, Adam; Michaud, Jennifer
2016-01-01
The authors assessed 4,865 middle and high school students for the ability to recognize and understand written and spoken morphologically simple words, morphologically complex words, and the syntactic structure of sentences and for the ability to answer questions about facts presented in a written passage and to make inferences based on those…
Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda
2010-01-01
Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n=26; 2;6 yrs). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children’s facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children’s ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language. PMID:19726000
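The "after controlling for" analyses described above are partial correlations. As a hedged illustration (toy data and variable names, not the study's), a partial correlation can be computed by correlating the residuals of the two variables of interest after regressing out the control variables:

import numpy as np

def partial_corr(x, y, controls):
    """Pearson correlation of x and y after regressing out the controls."""
    Z = np.column_stack([np.ones(len(x))] + list(controls))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Invented per-child scores (n = 26, mirroring the sample size above)
rng = np.random.default_rng(1)
speed_es = rng.normal(size=26)                    # Spanish processing speed
speed_en = rng.normal(size=26)                    # English processing speed
vocab_en = rng.normal(size=26)                    # English vocabulary size
vocab_es = 0.6 * speed_es + rng.normal(size=26)   # Spanish vocabulary size
print(partial_corr(speed_es, vocab_es, [speed_en, vocab_en]))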
Immediate effects of anticipatory coarticulation in spoken-word recognition
Salverda, Anne Pier; Kleinschmidt, Dave; Tanenhaus, Michael K.
2014-01-01
Two visual-world experiments examined listeners’ use of pre-word-onset anticipatory coarticulation in spoken-word recognition. Experiment 1 established the shortest lag with which information in the speech signal influences eye-movement control, using stimuli such as “The … ladder is the target”. With a neutral token of the definite article preceding the target word, saccades to the referent were not more likely than saccades to an unrelated distractor until 200–240 ms after the onset of the target word. In Experiment 2, utterances contained definite articles with natural anticipatory coarticulation pertaining to the onset of the target word (“The ladder … is the target”). A simple Gaussian classifier was able to predict the initial sound of the upcoming target word from formant information in the first few pitch periods of the article’s vowel. With these stimuli, effects of speech on eye-movement control began about 70 ms earlier than in Experiment 1, suggesting rapid use of anticipatory coarticulation. The results are interpreted as support for “data explanation” approaches to spoken-word recognition. Methodological implications for visual-world studies are also discussed. PMID:24511179
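A minimal sketch of the kind of simple Gaussian classifier described above, assuming the features are formant measurements (F1/F2, in Hz) taken from the first few pitch periods of the article's vowel; all values and class labels below are invented for illustration:

import numpy as np

def fit_gaussian_classifier(X, y):
    """Fit one multivariate Gaussian (mean, covariance) per class label."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def log_likelihood(x, mean, cov):
    """Log density of x under a multivariate normal distribution."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.inv(cov) @ d + logdet + len(x) * np.log(2 * np.pi))

def classify(x, model):
    """Return the class whose Gaussian gives x the highest likelihood."""
    return max(model, key=lambda c: log_likelihood(x, *model[c]))

# Toy F1/F2 measurements labeled with the initial sound of the upcoming word
X = np.array([[420., 1800.], [430., 1850.], [415., 1790.],
              [500., 1200.], [510., 1150.], [505., 1230.]])
y = np.array(['l', 'l', 'l', 'w', 'w', 'w'])
model = fit_gaussian_classifier(X, y)
print(classify(np.array([425., 1820.]), model))   # -> 'l'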
Kung, Carmen; Chwilla, Dorothee J; Schriefers, Herbert
2014-01-01
In two ERP experiments, we investigate the on-line interplay of lexical tone, intonation and semantic context during spoken word recognition in Cantonese Chinese. Experiment 1 shows that lexical tone and intonation interact immediately. Words with a low lexical tone at the end of questions (with a rising question intonation) lead to a processing conflict. This is reflected in a low accuracy in lexical identification and in a P600 effect compared to the same words at the end of a statement. Experiment 2 shows that a strongly biasing semantic context leads to much better lexical-identification performance for words with a low tone at the end of questions and to a disappearance of the P600 effect. These results support the claim that semantic context plays a major role in disentangling the tonal information from the intonational information, and thus, in resolving the on-line conflict between intonation and tone. However, the ERP data indicate that the introduction of a semantic context does not entirely eliminate on-line processing problems for words at the end of questions. This is revealed by the presence of an N400 effect for words with a low lexical tone and for words with a high-mid lexical tone at the end of questions. The ERP data thus show that, while semantic context helps in the eventual lexical identification, it makes the deviation of the contextually expected lexical tone from the actual acoustic signal more salient. © 2013 Published by Elsevier Ltd.
ERIC Educational Resources Information Center
McMurray, Bob; Tanenhaus, Michael K.; Aslin, Richard N.
2009-01-01
Spoken word recognition shows gradient sensitivity to within-category voice onset time (VOT), as predicted by several current models of spoken word recognition, including TRACE (McClelland, J., & Elman, J. (1986). The TRACE model of speech perception. "Cognitive Psychology," 18, 1-86). It remains unclear, however, whether this sensitivity is…
Call and Responsibility: Critical Questions for Youth Spoken Word Poetry
ERIC Educational Resources Information Center
Weinstein, Susan; West, Anna
2012-01-01
In this article, Susan Weinstein and Anna West embark on a critical analysis of the maturing field of youth spoken word poetry (YSW). Through a blend of firsthand experience, analysis of YSW-related films and television, and interview data from six years of research, the authors identify specific dynamics that challenge young poets as they…
"Context and Spoken Word Recognition in a Novel Lexicon": Correction
ERIC Educational Resources Information Center
Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.
2009-01-01
Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…
Eye Movements to Pictures Reveal Transient Semantic Activation during Spoken Word Recognition
ERIC Educational Resources Information Center
Yee, Eiling; Sedivy, Julie C.
2006-01-01
Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an…
Cross-modal metaphorical mapping of spoken emotion words onto vertical space.
Montoro, Pedro R; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando
2015-01-01
From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a 'positive-up/negative-down' embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis.
Brain-to-text: decoding spoken phrases from phone representations in the brain.
Herff, Christian; Heger, Dominic; de Pesters, Adriana; Telaar, Dominic; Brunner, Peter; Schalk, Gerwin; Schultz, Tanja
2015-01-01
It has long been speculated whether communication between humans and machines based on natural speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones, or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.
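The word and phone error rates quoted above are the standard ASR metrics: the Levenshtein (edit) distance between the decoded sequence and the reference, normalized by reference length. A minimal sketch of the metric (not the authors' decoding system):

def error_rate(reference, hypothesis):
    """Edit distance between token sequences, divided by reference length."""
    m, n = len(reference), len(hypothesis)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                                  # deletions only
    for j in range(n + 1):
        d[0][j] = j                                  # insertions only
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + cost)    # substitution
    return d[m][n] / m

# Over word tokens this is WER; over phone symbols it is the phone error rate.
ref = "the quick brown fox".split()
hyp = "the quick brown box".split()
print(error_rate(ref, hyp))   # 0.25, i.e., 25% WER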
SPEECH PERCEPTION AS A TALKER-CONTINGENT PROCESS
Nygaard, Lynne C.; Sommers, Mitchell S.; Pisoni, David B.
2011-01-01
To determine how familiarity with a talker’s voice affects perception of spoken words, we trained two groups of subjects to recognize a set of voices over a 9-day period. One group then identified novel words produced by the same set of talkers at four signal-to-noise ratios. Control subjects identified the same words produced by a different set of talkers. The results showed that the ability to identify a talker’s voice improved intelligibility of novel words produced by that talker. The results suggest that speech perception may involve talker-contingent processes whereby perceptual learning of aspects of the vocal source facilitates the subsequent phonetic analysis of the acoustic signal. PMID:21526138
NASA Astrophysics Data System (ADS)
Pei, Xiaomei; Barbour, Dennis L.; Leuthardt, Eric C.; Schalk, Gerwin
2011-08-01
Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility to 'read the mind', i.e. to interpret internally-generated speech, has been scarce. In this study, we found that it is possible to use signals recorded from the surface of the brain (electrocorticography) to discriminate the vowels and consonants embedded in spoken and in imagined words, and we defined the cortical areas that held the most information about discrimination of vowels and consonants. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.
Learning and consolidation of new spoken words in autism spectrum disorder.
Henderson, Lisa; Powell, Anna; Gareth Gaskell, M; Norbury, Courtenay
2014-11-01
Autism spectrum disorder (ASD) is characterized by rich heterogeneity in vocabulary knowledge and word knowledge that is not well accounted for by current cognitive theories. This study examines whether individual differences in vocabulary knowledge in ASD might be partly explained by a difficulty with consolidating newly learned spoken words and/or integrating them with existing knowledge. Nineteen boys with ASD and 19 typically developing (TD) boys matched on age and vocabulary knowledge showed similar improvements in recognition and recall of novel words (e.g. 'biscal') 24 hours after training, suggesting an intact ability to consolidate explicit knowledge of new spoken word forms. TD children showed competition effects for existing neighbors (e.g. 'biscuit') after 24 hours, suggesting that the new words had been integrated with existing knowledge over time. In contrast, children with ASD showed immediate competition effects that were not significant after 24 hours, suggesting a qualitative difference in the time course of lexical integration. These results are considered from the perspective of the dual-memory systems framework. © 2014 John Wiley & Sons Ltd.
Meyer, Ted A; Frisch, Stefan A; Pisoni, David B; Miyamoto, Richard T; Svirsky, Mario A
2003-07-01
Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener's lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener's closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly identified by the cochlear implant users. The Neighborhood Activation Model also predicted phoneme feature information well. The results obtained suggest that the Neighborhood Activation Model provides a reasonable explanation of word recognition by postlingually deafened adults after cochlear implantation. It appears that multichannel cochlear implants give cochlear implant users access to their mental lexicons in a manner that is similar to listeners with acoustic hearing. The lexical properties of the test stimuli used to assess performance are important to spoken-word recognition and should be included in further models of the word recognition process.
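A hedged sketch of the Neighborhood Activation Model's decision rule as summarized above: a word's identification probability is its frequency-weighted evidence divided by the summed evidence of the word and its similar-sounding neighbors, with segment-level evidence drawn from the listener's phoneme confusion matrices. The confusion probabilities and frequency counts below are invented:

from math import prod, log

def word_evidence(stimulus, candidate, confusion, freq):
    """Product of segment confusion probabilities p(perceived | spoken),
    weighted by the candidate's log frequency of occurrence."""
    p = prod(confusion[s][c] for s, c in zip(stimulus, candidate))
    return p * log(freq[candidate] + 1)

def nam_probability(stimulus, target, neighbors, confusion, freq):
    """Frequency-weighted target evidence relative to its neighborhood."""
    t = word_evidence(stimulus, target, confusion, freq)
    n = sum(word_evidence(stimulus, w, confusion, freq) for w in neighbors)
    return t / (t + n)

# Toy lexicon: target /kat/ with one similar-sounding neighbor /bat/
confusion = {'k': {'k': 0.8, 'b': 0.2}, 'b': {'b': 0.8, 'k': 0.2},
             'a': {'a': 1.0}, 't': {'t': 1.0}}
freq = {('k', 'a', 't'): 40, ('b', 'a', 't'): 25}
print(nam_probability(('k', 'a', 't'), ('k', 'a', 't'),
                      [('b', 'a', 't')], confusion, freq))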
Conducting spoken word recognition research online: Validation and a new timing method.
Slote, Joseph; Strand, Julia F
2016-06-01
Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.
Willis, Suzi; Goldbart, Juliet; Stansfield, Jois
2014-07-01
To compare the verbal short-term memory and visual working memory abilities of six children with congenital hearing impairment identified as having significant language-learning difficulties against normative data from typically hearing children, using standardized memory assessments. Six children with hearing loss aged 8-15 years were assessed on measures of verbal short-term memory (non-word and word recall) and visual working memory annually over a two-year period. All children had cognitive abilities within normal limits and used spoken language as their primary mode of communication. Language assessment scores at the beginning of the study revealed that all six participants exhibited delays of two years or more on standardized assessments of receptive and expressive vocabulary and spoken language. The children with hearing impairment scored significantly higher on the non-word recall task than on the "real" word recall task. They also exhibited significantly higher scores on visual working memory than the age-matched sample from the standardized memory assessment. Each of the six participants displayed the same pattern of strengths and weaknesses in verbal short-term memory and visual working memory despite their very different chronological ages. The children's poor ability to recall single-syllable words relative to non-words is a clinical indicator of their difficulties in verbal short-term memory. However, the children with hearing impairment do not display generalized processing difficulties and indeed demonstrate strengths in visual working memory. Poor word recall, in combination with difficulties with early word learning, may identify children with hearing impairment who will struggle to develop spoken language equal to that of their normally hearing peers. This early identification has the potential to allow for target-specific intervention that may remediate their difficulties. Copyright © 2014. Published by Elsevier Ireland Ltd.
Strand, Julia F
2014-03-01
A widely agreed-upon feature of spoken word recognition is that multiple lexical candidates in memory are simultaneously activated in parallel when a listener hears a word, and that those candidates compete for recognition (Luce, Goldinger, Auer, & Vitevitch, Perception 62:615-625, 2000; Luce & Pisoni, Ear and Hearing 19:1-36, 1998; McClelland & Elman, Cognitive Psychology 18:1-86, 1986). Because the presence of those competitors influences word recognition, much research has sought to quantify the processes of lexical competition. Metrics that quantify lexical competition continuously are more effective predictors of auditory and visual (lipread) spoken word recognition than are the categorical metrics traditionally used (Feld & Sommers, Speech Communication 53:220-228, 2011; Strand & Sommers, Journal of the Acoustical Society of America 130:1663-1672, 2011). A limitation of the continuous metrics is that they are somewhat computationally cumbersome and require access to existing speech databases. This article describes the Phi-square Lexical Competition Database (Phi-Lex): an online, searchable database that provides access to multiple metrics of auditory and visual (lipread) lexical competition for English words, available at www.juliastrand.com/phi-lex .
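For contrast with the continuous metrics the database provides, the traditional categorical metric mentioned above, phonological neighborhood density, counts the words reachable from a target by a single phoneme substitution, addition, or deletion. A minimal sketch over a toy lexicon of phoneme tuples (not the database's metric or word list):

def one_phoneme_apart(a, b):
    """True if phoneme sequences a and b differ by exactly one edit."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = sorted((a, b), key=len)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def neighborhood_density(target, lexicon):
    """Count of one-edit phonological neighbors of the target."""
    return sum(one_phoneme_apart(target, w) for w in lexicon if w != target)

lexicon = [('k', 'a', 't'), ('b', 'a', 't'), ('k', 'o', 't'),
           ('k', 'a', 't', 's'), ('d', 'o', 'g')]
print(neighborhood_density(('k', 'a', 't'), lexicon))   # 3 neighbors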
Selected Topics from LVCSR Research for Asian Languages at Tokyo Tech
NASA Astrophysics Data System (ADS)
Furui, Sadaoki
This paper presents our recent work in regard to building Large Vocabulary Continuous Speech Recognition (LVCSR) systems for the Thai, Indonesian, and Chinese languages. For Thai, since there is no word boundary in the written form, we have proposed a new method for automatically creating word-like units from a text corpus, and applied topic and speaking style adaptation to the language model to recognize spoken-style utterances. For Indonesian, we have applied proper noun-specific adaptation to acoustic modeling, and rule-based English-to-Indonesian phoneme mapping to solve the problem of large variation in proper noun and English word pronunciation in a spoken-query information retrieval system. In spoken Chinese, long organization names are frequently abbreviated, and abbreviated utterances cannot be recognized if the abbreviations are not included in the dictionary. We have proposed a new method for automatically generating Chinese abbreviations, and by expanding the vocabulary using the generated abbreviations, we have significantly improved the performance of spoken query-based search.
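The Thai unit-creation method itself is not detailed above; purely as a hedged, generic illustration of deriving word-like units from unsegmented text, the sketch below repeatedly merges the most frequent adjacent symbol pair (byte-pair-encoding style). This is a stand-in technique, not necessarily the authors' algorithm:

from collections import Counter

def learn_units(corpus, n_merges):
    """corpus: iterable of unsegmented strings; returns merged unit sequences."""
    seqs = [list(s) for s in corpus]
    for _ in range(n_merges):
        pairs = Counter((a, b) for seq in seqs for a, b in zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]     # most frequent adjacent pair
        for seq in seqs:
            i = 0
            while i < len(seq) - 1:
                if seq[i] == a and seq[i + 1] == b:
                    seq[i:i + 2] = [a + b]      # merge into a longer unit
                i += 1
    return seqs

print(learn_units(["thecatsat", "thecat"], 3))
# e.g., [['the', 'c', 'at', 's', 'at'], ['the', 'c', 'at']]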
Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M
2017-11-01
Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Repeated imitation makes human vocalizations more word-like.
Edmiston, Pierce; Perlman, Marcus; Lupyan, Gary
2018-03-14
People have long pondered the evolution of language and the origin of words. Here, we investigate how conventional spoken words might emerge from imitations of environmental sounds. Does the repeated imitation of an environmental sound gradually give rise to more word-like forms? In what ways do these forms resemble the original sounds that motivated them (i.e. exhibit iconicity)? Participants played a version of the children's game 'Telephone'. The first generation of participants imitated recognizable environmental sounds (e.g. glass breaking, water splashing). Subsequent generations imitated the previous generation of imitations for a maximum of eight generations. The results showed that the imitations became more stable and word-like, and later imitations were easier to learn as category labels. At the same time, even after eight generations, both spoken imitations and their written transcriptions could be matched above chance to the category of environmental sound that motivated them. These results show how repeated imitation can create progressively more word-like forms while continuing to retain a resemblance to the original sound that motivated them, and speak to the possible role of human vocal imitation in explaining the origins of at least some spoken words. © 2018 The Author(s).
Words Spoken with Insistence: "Wak'as" and the Limits of the Bolivian Multi-Institutional Democracy
ERIC Educational Resources Information Center
Cuelenaere, Laurence Janine
2009-01-01
Building on 18 months of fieldwork in the Bolivian highlands, this dissertation examines how the traversing of landscapes, through the mediation of spatial practices and spoken words, is embedded in systems of belief. By focusing on "wak'as" (i.e., sacred objects) and on how the inhabitants of the Altiplano relate to the Andean deities known as…
Roman, Adrienne S.; Pisoni, David B.; Kronenberger, William G.; Faulkner, Kathleen F.
2016-01-01
Objectives Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral-degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002) who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention and response set, talker discrimination and verbal and nonverbal short-term working memory. Design Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (PPVT-4 and EVT-2) and measures of auditory attention (NEPSY Auditory Attention (AA) and Response Set (RS) and a talker discrimination task (TD)) and short-term memory (visual digit and symbol spans). Results Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the PPVT-4 using language quotients to control for age effects. However, children who scored higher on the EVT-2 recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of auditory attention and short-term memory capacity were significantly correlated with a child’s ability to perceive noise-vocoded isolated words and sentences. Conclusions First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally-degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children’s ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally-degraded speech reflects early peripheral auditory processes as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that auditory attention and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. 
These results are relevant to research carried out with listeners who have hearing loss, since they are routinely required to encode, process and understand spectrally-degraded acoustic signals. PMID:28045787
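A minimal sketch of the four-channel noise vocoding used here: band-pass the speech into four analysis bands, extract each band's amplitude envelope, modulate band-limited noise with it, and sum the channels. The band edges and filter order below are illustrative choices, not the study's parameters:

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, edges=(100, 392, 1005, 2294, 5000)):
    """Replace spectral detail with envelope-modulated noise, band by band."""
    rng = np.random.default_rng(0)
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(3, [lo, hi], btype='bandpass', fs=fs, output='sos')
        band = sosfiltfilt(sos, speech)               # band-limit the speech
        env = np.abs(hilbert(band))                   # amplitude envelope
        noise = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        out += env * noise                            # modulate noise carrier
    return out / np.max(np.abs(out))                  # normalize

fs = 16000
t = np.arange(fs) / fs
demo = np.sin(2 * np.pi * 440 * t)    # stand-in for a recorded speech signal
vocoded = noise_vocode(demo, fs)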
Processing Lexical and Speaker Information in Repetition and Semantic/Associative Priming
ERIC Educational Resources Information Center
Lee, Chao-Yang; Zhang, Yu
2018-01-01
The purpose of this study is to investigate the interaction between processing lexical and speaker-specific information in spoken word recognition. The specific question is whether repetition and semantic/associative priming is reduced when the prime and target are produced by different speakers. In Experiment 1, the prime and target were repeated…
Aging and Cortical Mechanisms of Speech Perception in Noise
ERIC Educational Resources Information Center
Wong, Patrick C. M.; Jin, James Xumin; Gunasekera, Geshri M.; Abel, Rebekah; Lee, Edward R.; Dhar, Sumitrajit
2009-01-01
Spoken language processing in noisy environments, a hallmark of the human brain, is subject to age-related decline, even when peripheral hearing might be intact. The present study examines the cortical cerebral hemodynamics (measured by fMRI) associated with such processing in the aging brain. Younger and older subjects identified single words in…
Dissociating verbal and nonverbal audiovisual object processing.
Hocking, Julia; Price, Cathy J
2009-02-01
This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.
Infant Directed Speech Enhances Statistical Learning in Newborn Infants: An ERP Study
Teinonen, Tuomas; Tervaniemi, Mari; Huotilainen, Minna
2016-01-01
Statistical learning and the social contexts of language addressed to infants are hypothesized to play important roles in early language development. Previous behavioral work has found that the exaggerated prosodic contours of infant-directed speech (IDS) facilitate statistical learning in 8-month-old infants. Here we examined the neural processes involved in on-line statistical learning and investigated whether the use of IDS facilitates statistical learning in sleeping newborns. Event-related potentials (ERPs) were recorded while newborns were exposed to 12 pseudo-words, six spoken with the exaggerated pitch contours of IDS and six spoken without exaggerated pitch contours (ADS), in ten alternating blocks. We examined whether ERP amplitudes for syllable position within a pseudo-word (word-initial vs. word-medial vs. word-final, indicating statistical word learning) and speech register (ADS vs. IDS) would interact. The ADS and IDS registers elicited similar ERP patterns for syllable position in an early 0–100 ms component but elicited different ERP effects in both polarity and topographical distribution at 200–400 ms and 450–650 ms. These results provide the first evidence that the exaggerated pitch contours of IDS result in differences in brain activity linked to on-line statistical learning in sleeping newborns. PMID:27617967
Talker familiarity and spoken word recognition in school-age children*
Levi, Susannah V.
2014-01-01
Research with adults has shown that spoken language processing is improved when listeners are familiar with talkers’ voices, known as the familiar talker advantage. The current study explored whether this ability extends to school-age children, who are still acquiring language. Children were familiarized with the voices of three German–English bilingual talkers and were tested on the speech of six bilinguals, three of whom were familiar. Results revealed that children do show improved spoken language processing when they are familiar with the talkers, but this improvement was limited to highly familiar lexical items. This restriction of the familiar talker advantage is attributed to differences in the representation of highly familiar and less familiar lexical items. In addition, children did not exhibit accent-general learning; despite having been exposed to German-accented talkers during training, there was no improvement for novel German-accented talkers. PMID:25159173
ERIC Educational Resources Information Center
Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie
2017-01-01
Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the…
ERIC Educational Resources Information Center
Banzina, Elina; Dilley, Laura C.; Hewitt, Lynne E.
2016-01-01
The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated Russian learners' of English production of SS and UU syllables. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found,…
Schiff, Rachel; Saiegh-Haddad, Elinor
2018-01-01
This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children's phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children's early morphological awareness in SpA explained variance in children's gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.
Language as a multimodal phenomenon: implications for language learning, processing and evolution
Vigliocco, Gabriella; Perniss, Pamela; Vinson, David
2014-01-01
Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is composed wholly of an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms; and, finally, iconicity is the norm, rather than the exception, in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages. PMID:25092660
More than words (and faces): evidence for a Stroop effect of prosody in emotion word processing.
Filippi, Piera; Ocklenburg, Sebastian; Bowling, Daniel L; Heege, Larissa; Güntürkün, Onur; Newen, Albert; de Boer, Bart
2017-08-01
Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of "happy" and "sad" were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of "happy" and "sad" were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.
Feature Statistics Modulate the Activation of Meaning During Spoken Word Processing.
Devereux, Barry J; Taylor, Kirsten I; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K
2016-03-01
Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in (distinctiveness/sharedness) and likelihood of co-occurrence (correlational strength)--determine conceptual activation. To test these claims, we investigated the role of distinctiveness/sharedness and correlational strength in speech-to-meaning mapping, using a lexical decision task and computational simulations. Responses were faster for concepts with higher sharedness, suggesting that shared features are facilitatory in tasks like lexical decision that require access to them. Correlational strength facilitated responses for slower participants, suggesting a time-sensitive co-occurrence-driven settling mechanism. The computational simulation showed similar effects, with early effects of shared features and later effects of correlational strength. These results support a general-to-specific account of conceptual processing, whereby early activation of shared features is followed by the gradual emergence of a specific target representation. Copyright © 2015 The Authors. Cognitive Science published by Cognitive Science Society, Inc.
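A hedged sketch of the two feature statistics named above, computed from a toy binary concept-by-feature matrix (invented values, not the study's property norms). Distinctiveness is the reciprocal of the number of concepts a feature occurs in; correlational strength is taken here as the mean pairwise correlation among a concept's features:

import numpy as np

# Rows = concepts, columns = features (toy data)
M = np.array([[1, 1, 0, 1],    # dog:  has_fur, has_tail, -, can_swim
              [1, 1, 0, 0],    # cat:  has_fur, has_tail
              [0, 0, 1, 1]])   # fish: has_fins, can_swim

distinctiveness = 1.0 / M.sum(axis=0)    # shared features score low
print(distinctiveness)                   # [0.5 0.5 1.  0.5]

corr = np.corrcoef(M, rowvar=False)      # feature-feature correlations

def correlational_strength(concept_row, corr):
    """Mean off-diagonal correlation among the concept's features."""
    feats = np.flatnonzero(concept_row)
    sub = corr[np.ix_(feats, feats)]
    return sub[~np.eye(len(feats), dtype=bool)].mean()

print(correlational_strength(M[0], corr))   # strength for "dog"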
Pupillary Responses to Words That Convey a Sense of Brightness or Darkness
Mathôt, Sebastiaan; Grainger, Jonathan; Strijkers, Kristof
2017-01-01
Theories about embodiment of language hold that when you process a word’s meaning, you automatically simulate associated sensory input (e.g., perception of brightness when you process lamp) and prepare associated actions (e.g., finger movements when you process typing). To test the former prediction, we measured pupillary responses to single words that conveyed a sense of brightness (e.g., day) or darkness (e.g., night) or were neutral (e.g., house). We found that pupils were largest for words conveying darkness, of intermediate size for neutral words, and smallest for words conveying brightness. This pattern was found for both visually presented and spoken words, which suggests that it was due to the words’ meanings, rather than to visual or auditory properties of the stimuli. Our findings suggest that word meaning is sufficient to trigger a pupillary response, even when this response is not imposed by the experimental task, and even when this response is beyond voluntary control. PMID:28613135
Language-Mediated Visual Orienting Behavior in Low and High Literates
Huettig, Falk; Singh, Niharika; Mishra, Ramesh Kumar
2011-01-01
The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look-and-listen task that resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., “magar,” crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., “matar,” peas; a semantic competitor, e.g., “kachuwa,” turtle; and two unrelated distractors). In Experiment 2, the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments, high literates shifted their eye gaze toward phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates, in contrast, only used phonological information when semantic matches between spoken word and visual referent were not present (Experiment 2), but unlike high literates, these phonologically mediated shifts in eye gaze were not closely time-locked to the speech input. These data provide further evidence that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but, instead of participating in a tug-of-war among multiple types of cognitive representations, word–object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts. PMID:22059083
Proform-Antecedent Linking in Listeners with Language Impairments and Unimpaired Listeners
ERIC Educational Resources Information Center
Engel, Samantha Michelle
2016-01-01
This dissertation explores how listeners extract meaning from personal and reflexive pronouns in spoken language. To be understood, words like her and herself must be linked to a prior element in the speech stream (or antecedent). This process draws on syntactic knowledge and verbal working memory processes. I present two original research studies…
Lexical Competition Effects in Aphasia: Deactivation of Lexical Candidates in Spoken Word Processing
ERIC Educational Resources Information Center
Janse, Esther
2006-01-01
Research has shown that Broca's and Wernicke's aphasic patients show different impairments in auditory lexical processing. The results of an experiment with form-overlapping primes showed an inhibitory effect of form-overlap for control adults and a weak inhibition trend for Broca's aphasic patients, but a facilitatory effect of form-overlap was…
ERIC Educational Resources Information Center
Montrul, Silvina; Davidson, Justin; De La Fuente, Israel; Foote, Rebecca
2014-01-01
We examined how age of acquisition in Spanish heritage speakers and L2 learners interacts with implicitness vs. explicitness of tasks in gender processing of canonical and non-canonical ending nouns. Twenty-three Spanish native speakers, 29 heritage speakers, and 33 proficiency-matched L2 learners completed three on-line spoken word recognition…
Iconicity in English and Spanish and Its Relation to Lexical Category and Age of Acquisition
Lupyan, Gary
2015-01-01
Signed languages exhibit iconicity (resemblance between form and meaning) across their vocabulary, and many non-Indo-European spoken languages feature sizable classes of iconic words known as ideophones. In comparison, Indo-European languages like English and Spanish are believed to be arbitrary outside of a small number of onomatopoeic words. In three experiments with English and two with Spanish, we asked native speakers to rate the iconicity of ~600 words from the English and Spanish MacArthur-Bates Communicative Developmental Inventories. We found that iconicity in the words of both languages varied in a theoretically meaningful way with lexical category. In both languages, adjectives were rated as more iconic than nouns and function words, and corresponding to typological differences between English and Spanish in verb semantics, English verbs were rated as relatively iconic compared to Spanish verbs. We also found that both languages exhibited a negative relationship between iconicity ratings and age of acquisition. Words learned earlier tended to be more iconic, suggesting that iconicity in early vocabulary may aid word learning. Altogether these findings show that iconicity is a graded quality that pervades vocabularies of even the most “arbitrary” spoken languages. The findings provide compelling evidence that iconicity is an important property of all languages, signed and spoken, including Indo-European languages. PMID:26340349
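The reported negative relationship is a simple bivariate correlation, which can be illustrated as follows; the iconicity ratings and age-of-acquisition values below are invented for demonstration and are not the study's data.

    import numpy as np
    from scipy.stats import pearsonr

    # Invented values: iconicity ratings (1-7) and age of acquisition (months).
    iconicity = np.array([6.1, 5.4, 4.8, 3.9, 3.2, 2.5, 2.1, 1.6])
    aoa = np.array([14, 16, 18, 21, 24, 27, 30, 33])

    r, p = pearsonr(iconicity, aoa)
    print(r, p)   # strongly negative r: earlier-learned words are more iconic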
Effects of stress typicality during speeded grammatical classification.
Arciuli, Joanne; Cupples, Linda
2003-01-01
The experiments reported here were designed to investigate the influence of stress typicality during speeded grammatical classification of disyllabic English words by native and non-native speakers. Trochaic nouns and iambic verbs were considered to be typically stressed, whereas iambic nouns and trochaic verbs were considered to be atypically stressed. Experiments 1a and 2a showed that while native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading, there were no overall effects during classification of spoken stimuli. However, a subgroup of native speakers with high error rates did show a significant effect during classification of spoken stimuli. Experiments 1b and 2b showed that non-native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading. Typically stressed words were classified more accurately than atypically stressed words when the stimuli were spoken. Importantly, there was a significant relationship between error rates, vocabulary size, and the size of the stress typicality effect in each experiment. We conclude that participants use information about lexical stress to help them distinguish between disyllabic nouns and verbs during speeded grammatical classification. This is especially so for individuals with a limited vocabulary who lack other knowledge (e.g., semantic knowledge) about the differences between these grammatical categories.
Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.
de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo
2016-10-01
Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including lexical, grammatical, auditory, and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Role of Grammatical Category Information in Spoken Word Retrieval
Duràn, Carolina Palma; Pillon, Agnesa
2011-01-01
We investigated the role of lexical syntactic information such as grammatical gender and category in spoken word retrieval processes by using a blocking paradigm in picture and written word naming experiments. In Experiments 1, 3, and 4, we found that the naming of target words (nouns) from pictures or written words was faster when these target words were named within a list where only words from the same grammatical category had to be produced (homogeneous category list: all nouns) than when they had to be produced within a list comprising also words from another grammatical category (heterogeneous category list: nouns and verbs). On the other hand, we detected no significant facilitation effect when the target words had to be named within a homogeneous gender list (all masculine nouns) compared to a heterogeneous gender list (both masculine and feminine nouns). In Experiment 2, using the same blocking paradigm by manipulating the semantic category of the items, we found that naming latencies were significantly slower in the semantic category homogeneous in comparison with the semantic category heterogeneous condition. Thus semantic category homogeneity caused an interference, not a facilitation effect like grammatical category homogeneity. Finally, in Experiment 5, nouns in the heterogeneous category condition had to be named just after a verb (category-switching position) or a noun (same-category position). We found a facilitation effect of category homogeneity but no significant effect of position, which showed that the effect of category homogeneity found in Experiments 1, 3, and 4 was not due to a cost of switching between grammatical categories in the heterogeneous grammatical category list. These findings supported the hypothesis that grammatical category information impacts word retrieval processes in speech production, even when words are to be produced in isolation. They are discussed within the context of extant theories of lexical production. PMID:22110465
Insights into failed lexical retrieval from network science.
Vitevitch, Michael S; Chan, Kit Ying; Goldstein, Rutherford
2014-02-01
Previous network analyses of the phonological lexicon (Vitevitch, 2008) observed a web-like structure that exhibited assortative mixing by degree: words with dense phonological neighborhoods tend to have as neighbors words that also have dense phonological neighborhoods, and words with sparse phonological neighborhoods tend to have as neighbors words that also have sparse phonological neighborhoods. Given the role that assortative mixing by degree plays in network resilience, we examined instances of real and simulated lexical retrieval failures in computer simulations, analysis of a slips-of-the-ear corpus, and three psycholinguistic experiments for evidence of this network characteristic in human behavior. The results of the various analyses support the hypothesis that the structure of words in the mental lexicon influences lexical processing. The implications of network science for current models of spoken word recognition, language processing, and cognitive psychology more generally are discussed. Copyright © 2013 Elsevier Inc. All rights reserved.
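For readers new to the measure, degree assortativity can be computed directly on a toy phonological network. The sketch below assumes letter strings as stand-ins for phonemic transcriptions and the usual one-phoneme substitution/addition/deletion neighbor rule; the lexicon is invented, and this is not Vitevitch's network or code.

    import itertools
    import networkx as nx

    def are_neighbors(w1, w2):
        """One substitution, addition, or deletion apart (letters standing
        in for phonemes)."""
        if w1 == w2:
            return False
        if len(w1) == len(w2):
            return sum(a != b for a, b in zip(w1, w2)) == 1
        short, long_ = sorted((w1, w2), key=len)
        if len(long_) - len(short) != 1:
            return False
        return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

    lexicon = ["cat", "bat", "hat", "rat", "can", "cot", "at", "dog", "dot"]
    G = nx.Graph()
    G.add_nodes_from(lexicon)
    G.add_edges_from((a, b) for a, b in itertools.combinations(lexicon, 2)
                     if are_neighbors(a, b))

    # Positive values indicate assortative mixing by degree: words with many
    # neighbors tend to be connected to other words with many neighbors.
    print(nx.degree_assortativity_coefficient(G))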
The serial order of response units in word production: The case of typing.
Scaltritti, Michele; Longcamp, Marieke; Alario, F-Xavier
2018-05-01
The selection and ordering of response units (phonemes, letters, keystrokes) represents a transversal issue across different modalities of language production. Here, the issue of serial order was investigated with respect to typewriting. Following seminal investigations in the spoken modality, we conducted an experiment where participants typed a pair of words as many times as possible during a fixed time window. The two words shared their first two keystrokes, their last two keystrokes, all of their keystrokes, or none (unrelated). Fine-grained performance measures were recorded at the level of individual keystrokes. In contrast with previous results from the spoken modality, we observed an overall facilitation for words sharing the initial keystrokes. In addition, the initial overlap briefly delayed the execution of the following keystroke. The results are discussed with reference to different theoretical perspectives on serial order, with particular attention to the competing accounts offered by position coding models and chaining models. Our findings point to potential major differences between the speaking and typing modalities in terms of interactive activation between lexical and response-unit processing levels. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Lexical and sublexical units in speech perception.
Giroux, Ibrahima; Rey, Arnaud
2009-03-01
Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (i.e., Serial Recurrent Networks: Elman, 1990; and Parser: Perruchet & Vinter, 1998) in an experiment where we compare the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with Parser's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes. Copyright © 2009, Cognitive Science Society, Inc.
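A bracketing-style strategy of the kind contrasted here can be sketched by computing transitional probabilities (TPs) between adjacent syllables and positing word boundaries where TP dips. This toy illustration on a Saffran-style stream is ours (including the 0.75 threshold and the syllable inventory); it is not the Serial Recurrent Network or Parser implementation the authors tested.

    from collections import Counter

    # Saffran-style stream built from three trisyllabic "words".
    stream = ("bidaku" + "padoti" + "golabu" + "bidaku" + "golabu" + "padoti") * 20
    syls = [stream[i:i + 2] for i in range(0, len(stream), 2)]

    pairs = Counter(zip(syls, syls[1:]))
    firsts = Counter(syls[:-1])
    tp = {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

    # Bracketing: insert a boundary wherever the TP falls below threshold.
    words, current = [], [syls[0]]
    for a, b in zip(syls, syls[1:]):
        if tp[(a, b)] < 0.75:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    print(sorted(set(words)))   # -> ['bidaku', 'golabu', 'padoti']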
Meyer, Ted A.; Frisch, Stefan A.; Pisoni, David B.; Miyamoto, Richard T.; Svirsky, Mario A.
2012-01-01
Hypotheses: Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? Background: The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener’s lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener’s closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Methods: Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. Results: The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly identified by the cochlear implant users. The Neighborhood Activation Model also predicted phoneme feature information well. Conclusion: The results obtained suggest that the Neighborhood Activation Model provides a reasonable explanation of word recognition by postlingually deafened adults after cochlear implantation. It appears that multichannel cochlear implants give cochlear implant users access to their mental lexicons in a manner that is similar to listeners with acoustic hearing. The lexical properties of the test stimuli used to assess performance are important to spoken-word recognition and should be included in further models of the word recognition process. PMID:12851554
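The model's decision rule can be paraphrased compactly in code. In the sketch below the confusion probabilities and frequency counts are toy values (letters stand in for phonemes); the published model derives its inputs from the full closed-set consonant and vowel confusion matrices.

    import math

    # p(phoneme heard | phoneme spoken); toy values standing in for the
    # closed-set consonant and vowel confusion matrices.
    confusion = {
        "c": {"c": 0.8, "b": 0.1, "h": 0.1},
        "b": {"c": 0.1, "b": 0.8, "h": 0.1},
        "h": {"c": 0.1, "b": 0.1, "h": 0.8},
        "a": {"a": 1.0},
        "t": {"t": 1.0},
    }
    freq = {"cat": 40, "bat": 10, "hat": 5}   # toy frequency-of-occurrence counts

    def perceive_prob(stimulus, candidate):
        """Probability of hearing `candidate` given `stimulus`, phoneme by phoneme."""
        return math.prod(confusion[s].get(c, 0.0) for s, c in zip(stimulus, candidate))

    def nam_response_probs(stimulus, neighborhood):
        # Frequency-weighted activations, normalized over the target and its
        # similar-sounding neighbors (a Luce-style choice rule).
        act = {w: perceive_prob(stimulus, w) * freq[w] for w in neighborhood}
        total = sum(act.values())
        return {w: round(a / total, 3) for w, a in act.items()}

    print(nam_response_probs("cat", ["cat", "bat", "hat"]))
    # -> {'cat': 0.955, 'bat': 0.03, 'hat': 0.015}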
Hakuno, Yoko; Omori, Takahide; Yamamoto, Jun-Ichi; Minagawa, Yasuyo
2017-08-01
In natural settings, infants learn spoken language with the aid of a caregiver who explicitly provides social signals. Although previous studies have demonstrated that young infants are sensitive to these signals that facilitate language development, the impact of real-life interactions on early word segmentation and word-object mapping remains elusive. We tested whether infants aged 5-6 months and 9-10 months could segment a word from continuous speech and acquire a word-object relation in an ecologically valid setting. In Experiment 1, infants were exposed to a live tutor, while in Experiment 2, another group of infants were exposed to a televised tutor. Results indicate that both younger and older infants were capable of segmenting a word and learning a word-object association only when the stimuli were derived from a live tutor in a natural manner, suggesting that real-life interaction enhances the learning of spoken words in preverbal infants. Copyright © 2017 Elsevier Inc. All rights reserved.
47 CFR 80.314 - Distress communications.
Code of Federal Regulations, 2010 CFR
2010-10-01
... radiotelephone distress call consists of: (1) The distress signal MAYDAY spoken three times; (2) The words THIS IS; (3) The call sign (or name, if no call sign assigned) of the mobile station in distress, spoken...
Miles, James D; Proctor, Robert W
2009-10-01
In the current study, we show that the non-intentional processing of visually presented words and symbols can be attenuated by sounds. Importantly, this attenuation is dependent on the similarity in categorical domain between the sounds and words or symbols. Participants performed a task in which left or right responses were made contingent on the color of a centrally presented target that was either a location word (LEFT or RIGHT) or a left or right arrow. Responses were faster when they were on the side congruent with the word or arrow. This bias was reduced for location words by a neutral spoken word and for arrows by a tone series, but not vice versa. We suggest that words and symbols are processed with minimal attentional requirements until they are categorized into specific knowledge domains, but then become sensitive to other information within the same domain regardless of the similarity between modalities.
Neuronal Spoken Word Recognition: The Time Course of Processing Variation in the Speech Signal
ERIC Educational Resources Information Center
Schild, Ulrike; Roder, Brigitte; Friedrich, Claudia K.
2012-01-01
Recent neurobiological studies revealed evidence for lexical representations that are not specified for the coronal place of articulation (PLACE; Friedrich, Eulitz, & Lahiri, 2006; Friedrich, Lahiri, & Eulitz, 2008). Here we tested when these types of underspecified representations influence neuronal speech recognition. In a unimodal…
Orthography Influences the Perception and Production of Speech
ERIC Educational Resources Information Center
Rastle, Kathleen; McCormick, Samantha F.; Bayliss, Linda; Davis, Colin J.
2011-01-01
One intriguing question in language research concerns the extent to which orthographic information impacts on spoken word processing. Previous research has faced a number of methodological difficulties and has not reached a definitive conclusion. Our research addresses these difficulties by capitalizing on recent developments in the area of word…
Recognizing Speech under a Processing Load: Dissociating Energetic from Informational Factors
ERIC Educational Resources Information Center
Mattys, Sven L.; Brooks, Joanna; Cooke, Martin
2009-01-01
Effects of perceptual and cognitive loads on spoken-word recognition have so far largely escaped investigation. This study lays the foundations of a psycholinguistic approach to speech recognition in adverse conditions that draws upon the distinction between energetic masking, i.e., listening environments leading to signal degradation, and…
Phonological Stereotypes and Names in Temne.
ERIC Educational Resources Information Center
Nemer, Julie F.
1987-01-01
Many personal names in Temne (a Mel language spoken in Sierra Leone) are borrowed from other languages, containing foreign sounds and sequences which are unpronounceable for Temne speakers when they appear in other words. These exceptions are treated as instances of phonological stereotyping (cases remaining resistant to assimilation processes).…
Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)
NASA Astrophysics Data System (ADS)
Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto
An automatic speech-to-text transformation system, suited to unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a prior stage of isolated-word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for the input sequence. Pronunciation differences among regions of Brazil are considered, but only those that cause differences in phonological transcription, because phonetic-level differences are absorbed during the transformation to the phonological level. In the final stage, all candidate written words are analyzed from an orthographic and grammatical point of view, to eliminate the incorrect ones.
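The staged pipeline described above (filter non-PLB elements, expand weighted grapheme alternatives, rank probabilistically, check the lexicon) might look roughly as follows. Everything in the sketch, including the phoneme inventory, the phoneme-to-grapheme weights, and the function name, is an invented illustration, not the system's actual implementation.

    import itertools

    PLB_PHONEMES = {"k", "a", "z", "u"}       # toy inventory, not the real one

    # Weighted phoneme-to-grapheme alternatives (e.g. Portuguese /z/ may be
    # written <z>, <s>, or <x>, depending on context).
    P2G = {
        "k": [("c", 0.7), ("qu", 0.3)],
        "a": [("a", 1.0)],
        "z": [("z", 0.4), ("s", 0.4), ("x", 0.2)],
        "u": [("u", 1.0)],
    }

    def transcode(phones, lexicon=None):
        # Stage 1: eliminate phonetic elements that do not belong to PLB.
        phones = [p for p in phones if p in PLB_PHONEMES]
        # Stage 2: expand every grapheme combination with its probability.
        candidates = []
        for combo in itertools.product(*(P2G[p] for p in phones)):
            spelling = "".join(g for g, _ in combo)
            prob = 1.0
            for _, w in combo:
                prob *= w
            candidates.append((spelling, prob))
        candidates.sort(key=lambda sp: -sp[1])   # probabilistic ordering
        # Stage 3: keep only lexicon-attested forms, if a lexicon is given.
        if lexicon is not None:
            candidates = [c for c in candidates if c[0] in lexicon]
        return candidates

    print(transcode(["k", "a", "z", "a"], lexicon={"casa"}))  # ~ [('casa', 0.28)]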
How should a speech recognizer work?
Scharenborg, Odette; Norris, Dennis; Bosch, Louis; McQueen, James M
2005-11-12
Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR, that, in contrast to existing models of HSR, recognizes words from real speech input. 2005 Lawrence Erlbaum Associates, Inc.
Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina
2017-11-22
Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind (n = 10, 9 female, 1 male) and sighted control (n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible. Copyright © 2017 the authors 0270-6474/17/3711495-10$15.00/0.
ERIC Educational Resources Information Center
Preston, Jonathan L.; Felsenfeld, Susan; Frost, Stephen J.; Mencl, W. Einar; Fulbright, Robert K.; Grigorenko, Elena L.; Landi, Nicole; Seki, Ayumi; Pugh, Kenneth R.
2012-01-01
Purpose: To examine neural response to spoken and printed language in children with speech sound errors (SSE). Method: Functional magnetic resonance imaging was used to compare processing of auditorily and visually presented words and pseudowords in 17 children with SSE, ages 8;6[years;months] through 10;10, with 17 matched controls. Results: When…
When do combinatorial mechanisms apply in the production of inflected words?
Cholin, Joana; Rapp, Brenda; Miozzo, Michele
2010-01-01
A central question for theories of inflected word processing is to determine under what circumstances compositional procedures apply. Some accounts (e.g., the dual-mechanism model; Clahsen, 1999) propose that compositional processes only apply to verbs that take productive affixes. For all other verbs, inflected forms are assumed to be stored in the lexicon in a nondecomposed manner. This account makes clear predictions about the consequences of disruption to the lexical access mechanisms involved in the spoken production of inflected forms. Briefly, it predicts that nonproductive forms (which require lexical access) should be more affected than productive forms (which, depending on the language task, may not). We tested these predictions through the detailed analysis of the spoken production of a German-speaking individual with an acquired lexical impairment resulting from a stroke. Analyses of response accuracy, error types, and frequency effects revealed that combinatorial processes are not restricted to verbs that take productive inflections. On this basis, we propose an alternative account, the stem-based assembly model (SAM), which posits that combinatorial processes may be available to all stems and not only to those that combine with productive affixes.
Hope, Thomas M H; Leff, Alex P; Prejawa, Susan; Bruce, Rachel; Haigh, Zula; Lim, Louise; Ramsden, Sue; Oberhuber, Marion; Ludersdorfer, Philipp; Crinion, Jenny; Seghier, Mohamed L; Price, Cathy J
2017-06-01
Stroke survivors with acquired language deficits are commonly thought to reach a 'plateau' within a year of stroke onset, after which their residual language skills will remain stable. Nevertheless, there have been reports of patients who appear to recover over years. Here, we analysed longitudinal change in 28 left-hemisphere stroke patients, each more than a year post-stroke when first assessed-testing each patient's spoken object naming skills and acquiring structural brain scans twice. Some of the patients appeared to improve over time while others declined; both directions of change were associated with, and predictable given, structural adaptation in the intact right hemisphere of the brain. Contrary to the prevailing view that these patients' language skills are stable, these results imply that real change continues over years. The strongest brain-behaviour associations (the 'peak clusters') were in the anterior temporal lobe and the precentral gyrus. Using functional magnetic resonance imaging, we confirmed that both regions are actively involved when neurologically normal control subjects name visually presented objects, but neither appeared to be involved when the same participants used a finger press to make semantic association decisions on the same stimuli. This suggests that these regions serve word-retrieval or articulatory functions in the undamaged brain. We teased these interpretations apart by reference to change in other tasks. Consistent with the claim that the real change is occurring here, change in spoken object naming was correlated with change in two other similar tasks, spoken action naming and written object naming, each of which was independently associated with structural adaptation in similar (overlapping) right hemisphere regions. Change in written object naming, which requires word-retrieval but not articulation, was also significantly more correlated with both (i) change in spoken object naming; and (ii) structural adaptation in the two peak clusters, than was change in another task-auditory word repetition-which requires articulation but not word retrieval. This suggests that the changes in spoken object naming reflected variation at the level of word-retrieval processes. Surprisingly, given their qualitatively similar activation profiles, hypertrophy in the anterior temporal region was associated with improving behaviour, while hypertrophy in the precentral gyrus was associated with declining behaviour. We predict that either or both of these regions might be fruitful targets for neural stimulation studies (suppressing the precentral region and/or enhancing the anterior temporal region), aiming to encourage recovery or arrest decline even years after stroke occurs. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain.
Lexical access in sign language: a computational model.
Caselli, Naomi K; Cohen-Goldberg, Ariel M
2014-01-01
Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
Auer, Edward T.; Bernstein, Lynne E.
2009-01-01
Purpose: Sensitivity of subjective estimates of Age of Acquisition (AOA) and Acquisition Channel (AC) (printed, spoken, signed) to differences in word exposure within and between populations that differ dramatically in perceptual experience was examined. Methods: 50 participants with early-onset deafness and 50 with normal hearing rated 175 words in terms of subjective age-of-acquisition and acquisition channel. Additional data were collected using a standardized test of reading and vocabulary. Results: Deaf participants rated words as learned later (M = 10 years) than did participants with normal hearing (M = 8.5 years) (F(1,99) = 28.59; p < .01). Group-averaged item ratings of AOA were highly correlated across the groups (r = .971), and with normative order of acquisition (deaf: r = .950 and hearing: r = .946). The groups differed in their ratings of acquisition channel: Hearing: printed = 30%, spoken = 70%, signed = 0%; Deaf: printed = 45%, spoken = 38%, signed = 17%. Conclusions: Subjective AOA and AC measures are sensitive to between- and within-group differences in word experience. The results demonstrate that these subjective measures can be applied as reliable proxies for direct measures of lexical development in studies of lexical knowledge in adults with prelingual onset deafness. PMID:18506048
Attentional Capture of Objects Referred to by Spoken Language
ERIC Educational Resources Information Center
Salverda, Anne Pier; Altmann, Gerry T. M.
2011-01-01
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…
Caselli, Naomi K; Pyers, Jennie E
2017-07-01
Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
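The predictor structure of the analysis can be illustrated on simulated data. Note that this sketch substitutes a plain logistic regression for the authors' mixed-effects logistic regressions (omitting random effects for child and sign), and all numbers are invented.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 300                                   # sign-by-child observations, toy
    df = pd.DataFrame({
        "iconicity": rng.uniform(1, 7, n),    # rated iconicity (1-7)
        "density": rng.poisson(4, n),         # phonological neighborhood size
        "log_freq": rng.normal(1.5, 0.6, n),  # log lexical frequency
    })
    # Simulate acquisition so that all three properties independently help.
    logit_p = -4.0 + 0.5 * df.iconicity + 0.3 * df.density + 1.0 * df.log_freq
    p = 1.0 / (1.0 + np.exp(-logit_p))
    df["produced"] = rng.binomial(1, p.to_numpy())

    model = smf.logit("produced ~ iconicity + density + log_freq", data=df).fit()
    print(model.params)                       # recovers positive coefficients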
Finding Words in a Language that Allows Words without Vowels
ERIC Educational Resources Information Center
El Aissati, Abder; McQueen, James M.; Cutler, Anne
2012-01-01
Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring "win" in "twin" because "t" cannot be a word). However, the constraint would be counter-productive in…
On the Conventionalization of Mouth Actions in Australian Sign Language.
Johnston, Trevor; van Roekel, Jane; Schembri, Adam
2016-03-01
This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages because the hands produce the signs, which individually and in groups are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and face, plays a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language (making comparisons with other signed languages where data are available) and of the form/meaning pairings that these mouth actions instantiate.
Presentation format effects in a levels-of-processing task.
Foos, Paul W; Goolkasian, Paula
2008-01-01
Three experiments were conducted to examine the better long-term memory performance observed when stimulus items are pictures or spoken words rather than printed words. Hypotheses regarding the allocation of attention to printed words, the semantic link between pictures and processing, and a rich long-term representation for pictures were tested. Using levels-of-processing tasks eliminated format effects when no memory test was expected and processing was deep (E1), and when study and test formats did not match (E3). Pictures produced superior performance when a memory test was expected (E1 & 2) and when study and test formats were the same (E3). Results of all experiments support the attenuation-of-attention model and indicate that picture superiority is due to more direct access to semantic processing and a richer visual code. General principles to guide the processing of stimulus information are discussed.
Age differences and format effects in working memory.
Foos, Paul W; Goolkasian, Paula
2010-07-01
Format effects refer to lower recall of printed words from working memory when compared to spoken words or pictures. These effects have been attributed to an attenuation of attention to printed words. The present experiment compares younger and older adults' recall of three or six items presented as pictures, spoken words, printed words, and alternating case WoRdS. The latter stimuli have been shown to increase attention to printed words and, thus, reduce format effects. The question of interest was whether these stimuli would also reduce format effects for older adults whose working memory capacity has fewer attentional resources to allocate. Results showed that older adults performed as well as younger adults with three items but less well with six and that format effects were reduced for both age groups, but more for young, when alternating case words were used. Other findings regarding executive control of working memory are discussed. The obtained differences support models of reduced capacity in older adult working memory.
Using Speech Recall in Hearing Aid Fitting and Outcome Evaluation Under Ecological Test Conditions.
Lunner, Thomas; Rudner, Mary; Rosenbom, Tove; Ågren, Jessica; Ng, Elaine Hoi Ning
2016-01-01
In adaptive Speech Reception Threshold (SRT) tests used in the audiological clinic, speech is presented at signal to noise ratios (SNRs) that are lower than those generally encountered in real-life communication situations. At higher, ecologically valid SNRs, however, SRTs are insensitive to changes in hearing aid signal processing that may be of benefit to listeners who are hard of hearing. Previous studies conducted in Swedish using the Sentence-final Word Identification and Recall test (SWIR) have indicated that at such SNRs, the ability to recall spoken words may be a more informative measure. In the present study, a Danish version of SWIR, known as the Sentence-final Word Identification and Recall Test in a New Language (SWIRL) was introduced and evaluated in two experiments. The objective of experiment 1 was to determine if the Swedish results demonstrating benefit from noise reduction signal processing for hearing aid wearers could be replicated in 25 Danish participants with mild to moderate symmetrical sensorineural hearing loss. The objective of experiment 2 was to compare direct-drive and skin-drive transmission in 16 Danish users of bone-anchored hearing aids with conductive hearing loss or mixed sensorineural and conductive hearing loss. In experiment 1, performance on SWIRL improved when hearing aid noise reduction was used, replicating the Swedish results and generalizing them across languages. In experiment 2, performance on SWIRL was better for direct-drive compared with skin-drive transmission conditions. These findings indicate that spoken word recall can be used to identify benefits from hearing aid signal processing at ecologically valid, positive SNRs where SRTs are insensitive.
Micro-Based Speech Recognition: Instructional Innovation for Handicapped Learners.
ERIC Educational Resources Information Center
Horn, Carin E.; Scott, Brian L.
A new voice-based learning system (VBLS), which allows the handicapped user to interact with a microcomputer by voice commands, is described. Speech or voice recognition is the computerized process of identifying a spoken word or phrase, including those resulting from speech impediments. This new technology is helpful to the severely physically…
ERIC Educational Resources Information Center
Macizo, Pedro; Van Petten, Cyma; O'Rourke, Polly L.
2012-01-01
Many multisyllabic words contain shorter words that are not semantic units, like the CAP in HANDICAP and the DURA ("hard") in VERDURA ("vegetable"). The spaces between printed words identify word boundaries, but spurious identification of these embedded words is a potentially greater challenge for spoken language comprehension, a challenge that is…
When Half a Word Is Enough: Infants Can Recognize Spoken Words Using Partial Phonetic Information.
ERIC Educational Resources Information Center
Fernald, Anne; Swingley, Daniel; Pinto, John P.
2001-01-01
Two experiments tracked infants' eye movements to examine use of word-initial information to understand fluent speech. Results indicated that 21- and 18-month-olds recognized partial words as quickly and reliably as whole words. Infants' productive vocabulary and reaction time were related to word recognition accuracy. Results show that…
Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L
2018-01-01
Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.
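The Experiment 1b manipulation corresponds to a standard envelope-modulation operation. The sketch below shows one plausible implementation (Hilbert envelope, low-pass smoothing, multiplicative modulation); the parameter values and function names are illustrative guesses, not the authors' stimulus-generation code.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def amplitude_envelope(x, fs, cutoff_hz=30):
        """Smoothed intensity envelope of a signal (analytic-signal magnitude)."""
        env = np.abs(hilbert(x))
        b, a = butter(2, cutoff_hz / (fs / 2))
        return filtfilt(b, a, env)

    def comodulate(word, background, fs):
        """Impose the word's intensity envelope on the background sound."""
        n = min(len(word), len(background))
        word, background = word[:n], background[:n]
        env = amplitude_envelope(word, fs)
        env = env / (env.max() + 1e-12)       # normalize to [0, 1]
        return word + background * env        # mix word with co-modulated sound

    # Toy demo: a 1-second word-like amplitude-modulated tone plus white noise.
    fs = 16000                                # sample rate (Hz), illustrative
    t = np.arange(fs) / fs
    word = np.sin(2 * np.pi * 150 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
    noise = 0.1 * np.random.default_rng(0).standard_normal(fs)
    stimulus = comodulate(word, noise, fs)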
The Influence of the Phonological Neighborhood Clustering Coefficient on Spoken Word Recognition
ERIC Educational Resources Information Center
Chan, Kit Ying; Vitevitch, Michael S.
2009-01-01
Clustering coefficient--a measure derived from the new science of networks--refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words "bat", "hat", and "can", all of which are neighbors of the word "cat"; the words "bat" and…
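Continuing the example, the measure is easy to compute by hand or with a graph library. The neighbor set and substitution-only rule below are illustrative simplifications (real phonological neighborhoods also include additions and deletions, and operate over phonemes rather than letters).

    import itertools
    import networkx as nx

    # An illustrative neighbor set for "cat".
    neighbors_of_cat = ["bat", "hat", "can", "cut", "cap"]

    def one_substitution_apart(a, b):
        return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

    G = nx.Graph()
    for n in neighbors_of_cat:
        G.add_edge("cat", n)
    for a, b in itertools.combinations(neighbors_of_cat, 2):
        if one_substitution_apart(a, b):
            G.add_edge(a, b)                  # e.g. "bat"-"hat", "can"-"cap"

    # Proportion of cat's neighbor pairs that are themselves neighbors:
    print(nx.clustering(G, "cat"))            # 2 of 10 pairs -> 0.2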
Prosody and Spoken Word Recognition in Early and Late Spanish-English Bilingual Individuals
ERIC Educational Resources Information Center
Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.
2017-01-01
Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…
An auditory analog of the picture superiority effect.
Crutcher, Robert J; Beer, Jenay M
2011-01-01
Previous research has found that pictures (e.g., a picture of an elephant) are remembered better than words (e.g., the word "elephant"), an empirical finding called the picture superiority effect (Paivio & Csapo. Cognitive Psychology 5(2):176-206, 1973). However, very little research has investigated such memory differences for other types of sensory stimuli (e.g. sounds or odors) and their verbal labels. Four experiments compared recall of environmental sounds (e.g., ringing) and spoken verbal labels of those sounds (e.g., "ringing"). In contrast to earlier studies that have shown no difference in recall of sounds and spoken verbal labels (Philipchalk & Rowe. Journal of Experimental Psychology 91(2):341-343, 1971; Paivio, Philipchalk, & Rowe. Memory & Cognition 3(6):586-590, 1975), the experiments reported here yielded clear evidence for an auditory analog of the picture superiority effect. Experiments 1 and 2 showed that sounds were recalled better than the verbal labels of those sounds. Experiment 2 also showed that verbal labels are recalled as well as sounds when participants imagine the sound that the word labels. Experiments 3 and 4 extended these findings to incidental-processing task paradigms and showed that the advantage of sounds over words is enhanced when participants are induced to label the sounds.
Causal Influence of Articulatory Motor Cortex on Comprehending Single Spoken Words: TMS Evidence.
Schomers, Malte R; Kirilina, Evgeniya; Weigand, Anne; Bajbouj, Malek; Pulvermüller, Friedemann
2015-10-01
Classic wisdom had been that motor and premotor cortex contribute to motor execution but not to higher cognition and language comprehension. In contrast, mounting evidence from neuroimaging, patient research, and transcranial magnetic stimulation (TMS) suggest sensorimotor interaction and, specifically, that the articulatory motor cortex is important for classifying meaningless speech sounds into phonemic categories. However, whether these findings speak to the comprehension issue is unclear, because language comprehension does not require explicit phonemic classification and previous results may therefore relate to factors alien to semantic understanding. We here used the standard psycholinguistic test of spoken word comprehension, the word-to-picture-matching task, and concordant TMS to articulatory motor cortex. TMS pulses were applied to primary motor cortex controlling either the lips or the tongue as subjects heard critical word stimuli starting with bilabial lip-related or alveolar tongue-related stop consonants (e.g., "pool" or "tool"). A significant cross-over interaction showed that articulatory motor cortex stimulation delayed comprehension responses for phonologically incongruent words relative to congruous ones (i.e., lip area TMS delayed "tool" relative to "pool" responses). As local TMS to articulatory motor areas differentially delays the comprehension of phonologically incongruous spoken words, we conclude that motor systems can take a causal role in semantic comprehension and, hence, higher cognition. © The Author 2014. Published by Oxford University Press.
Spoken Language and Mathematics.
ERIC Educational Resources Information Center
Raiker, Andrea
2002-01-01
Examines how teachers and learners use spoken language in the three-part mathematics lesson advocated by the British National Numeracy Strategy. Recognizes language's importance by emphasizing correct use of mathematical vocabulary in raising standards. Finds pupils and teachers appear to ascribe different meanings to scientific words because of their…
Segmentation of Written Words in French
ERIC Educational Resources Information Center
Chetail, Fabienne; Content, Alain
2013-01-01
Syllabification of spoken words has been largely used to define syllabic properties of written words, such as the number of syllables or syllabic boundaries. By contrast, some authors proposed that the functional structure of written words stems from visuo-orthographic features rather than from the transposition of phonological structure into the…
ERIC Educational Resources Information Center
Zhang, Qingfang; Chen, Hsuan-Chih; Weekes, Brendan Stuart; Yang, Yufang
2009-01-01
A picture-word interference paradigm with visually presented distractors was used to investigate the independent effects of orthographic and phonological facilitation on Mandarin monosyllabic word production. Both the stimulus-onset asynchrony (SOA) and the picture-word relationship along different lexical dimensions were varied. We observed a…
The Perception of Assimilation in Newly Learned Novel Words
ERIC Educational Resources Information Center
Snoeren, Natalie D.; Gaskell, M. Gareth; Di Betta, Anna Maria
2009-01-01
The present study investigated the mechanisms underlying perceptual compensation for assimilation in novel words. During training, participants learned canonical versions of novel spoken words (e.g., "decibot") presented in isolation. Following exposure to a second set of novel words the next day, participants carried out a phoneme…
Kouider, Sid; Dupoux, Emmanuel
2005-08-01
We present a novel subliminal priming technique that operates in the auditory modality. Masking is achieved by hiding a spoken word within a stream of time-compressed speechlike sounds with similar spectral characteristics. Participants were unable to consciously identify the hidden words, yet reliable repetition priming was found. This effect was unaffected by a change in the speaker's voice and remained restricted to lexical processing. The results show that the speech modality, like the written modality, involves the automatic extraction of abstract word-form representations that do not include nonlinguistic details. In both cases, priming operates at the level of discrete and abstract lexical entries and is little influenced by overlap in form or semantics.
Li, Chuchu; Wang, Min
2017-08-01
Three sets of experiments using picture naming tasks with the form preparation paradigm investigated the influence of orthographic experience on the development of the phonological preparation unit in spoken word production in native Mandarin-speaking children. Participants included kindergarten children who have not received formal literacy instruction; Grade 1 children who are comparatively more exposed to the alphabetic pinyin system and have very limited Chinese character knowledge; Grades 2 and 4 children who have better character knowledge and more exposure to characters; and skilled adult readers who have the most advanced character knowledge and most exposure to characters. Only Grade 1 children showed the form preparation effect in the same initial consonant condition (i.e., when a list of target words shared the initial consonant). Both Grade 4 children and adults showed the preparation effect when the initial syllable (but not tone) among target words was shared. Kindergartners and Grade 2 children only showed the preparation effect when the initial syllable including tonal information was shared. These developmental changes in phonological preparation could be interpreted as a joint function of the modification of phonological representation and attentional shift. Extensive pinyin experience encourages speakers to attend to and select the onset phoneme in phonological preparation, whereas extensive character experience encourages speakers to prepare spoken words in syllables.
Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen
2016-02-01
To investigate the impact of a spoken language intervention curriculum designed to improve the language environments that caregivers of low socioeconomic status (SES) provide for their deaf or hard-of-hearing (D/HH) children with cochlear implants (CI) and hearing aids (HA), in support of children's spoken language development. Quasiexperimental. Tertiary. Thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies Medicaid or WIC/LINK), with children aged < 4.5 years, hearing loss of ≥ 30 dB between 500 and 4000 Hz, using at least one amplification device with adequate amplification (hearing aid, cochlear implant, osseo-integrated device). Behavioral. Caregiver-directed educational intervention curriculum designed to improve D/HH children's early language environments. Changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], Conversational Turn Count [CTC]). Significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group. No significant changes in LENA outcomes. Results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.
Carr, Deborah; Felce, Janet
2007-04-01
The context for this work was an evaluation study [Carr, D., & Felce, J. A. (in press)] of the early phases of the Picture Exchange Communication System (PECS) [Frost, L. A., & Bondy, A. S. (1994). The picture exchange communication system training manual. Cherry Hill, NJ: Pyramid Educational Consultants, Inc.; Frost, L. A., & Bondy, A. S. (2004). The picture exchange communication system training manual, 2nd edn. Newark, DE: Pyramid Educational Consultants, Inc.]. This paper reports that five of 24 children who received 15 h of PECS teaching towards Phase III over a period of 4-5 weeks showed concomitant increases in speech production, either in initiating communication with staff or in responding, or both. No children in the PECS group demonstrated a decrease in spoken words after receiving PECS teaching. In the control group, only one of 17 children demonstrated a minimal increase and four of 17 children demonstrated a decrease in use of spoken words after a similar period without PECS teaching.
When canary primes yellow: effects of semantic memory on overt attention.
Léger, Laure; Chauvet, Elodie
2015-02-01
This study explored how overt attention is influenced by the colour that is primed when a target word is read during a lexical visual search task. Prior studies have shown that attention can be influenced by conceptual or perceptual overlap between a target word and distractor pictures: attention is attracted to pictures that have the same form (rope--snake) or colour (green--frog) as the spoken target word or is drawn to an object from the same category as the spoken target word (trumpet--piano). The hypothesis for this study was that attention should be attracted to words displayed in the colour that is primed by reading a target word (for example, yellow for canary). An experiment was conducted in which participants' eye movements were recorded whilst they completed a lexical visual search task. The primary finding was that participants' eye movements were mainly directed towards words displayed in the colour primed by reading the target word, even though this colour was not relevant to completing the visual search task. This result is discussed in terms of top-down guidance of overt attention in visual search for words.
Vitevitch, Michael S.
2008-01-01
A comparison of the lexical characteristics of 88 auditory misperceptions (i.e., slips of the ear) showed no differences in word frequency, neighborhood density, or neighborhood frequency between the actual and the perceived utterances. Another comparison of slip of the ear tokens (i.e., actual and perceived utterances) and words in general (i.e., randomly selected from the lexicon) showed that slip of the ear tokens had denser neighborhoods and higher neighborhood frequency than words in general, as predicted from laboratory studies. Contrary to prediction, slip of the ear tokens were higher in frequency of occurrence than words in general. Additional laboratory-based investigations examined the possible source of the contradictory word frequency finding, highlighting the importance of using both naturalistic and experimental data to develop models of spoken language processing. PMID:12866911
The impact of phonetic dissimilarity on the perception of foreign accented speech
NASA Astrophysics Data System (ADS)
Weil, Shawn A.
2003-10-01
Non-normative speech (i.e., synthetic speech, pathological speech, foreign accented speech) is more difficult for native listeners to process than is normative speech. Does perceptual dissimilarity affect only intelligibility, or are there other costs to processing? The current series of experiments investigates both the intelligibility and the time course of foreign accented speech (FAS) perception. Native English listeners heard single English words spoken by both native English speakers and non-native speakers (Mandarin or Russian). Words were chosen based on the similarity between the phonetic inventories of the respective languages. Three experimental designs were used: a cross-modal matching task, a word repetition (shadowing) task, and two subjective ratings tasks which measured impressions of accentedness and effortfulness. The results replicate previous investigations in finding that FAS significantly lowers word intelligibility. Furthermore, FAS also carries a cost in perceptual effort: in the word repetition task, correct responses were slower to accented words than to nonaccented words. An analysis indicates that both intelligibility and reaction time are, in part, functions of the similarity between the talker's utterance and the listener's representation of the word.
Stand and deliver: the art of speaking in public.
Mariano, Carmen M
2002-01-01
Anyone who took Latin in high school learned that many English words are derived from Latin roots. One such word is auditorium. It comes from the Latin words audio, which means "to hear," and taurus, which means "the bull." Therefore, an auditorium is a place where people go to hear the bull. This article is written to change that. It is written to end the bull. Carmen Mariano has spoken before audiences in seven states and five countries. He has a master's degree from Harvard University and a doctorate from Boston College. Neither degree has cured Carmen of his Boston accent, but both have taught him something about the power of the spoken word. Carmen shares that knowledge, and that power, in this article. And that is no bull.
Neural Signatures of Language Co-activation and Control in Bilingual Spoken Word Comprehension
Chen, Peiyao; Bobb, Susan C.; Hoshino, Noriko; Marian, Viorica
2017-01-01
To examine the neural signatures of language co-activation and control during bilingual spoken word comprehension, Korean-English bilinguals and English monolinguals were asked to make overt or covert semantic relatedness judgments on auditorily-presented English word pairs. In two critical conditions, participants heard word pairs consisting of an English-Korean interlingual homophone (e.g., the sound /mu:n/ means “moon” in English and “door” in Korean) as the prime and an English word as the target. In the homophone-related condition, the target (e.g., “lock”) was related to the homophone’s Korean meaning, but not related to the homophone’s English meaning. In the homophone-unrelated condition, the target was unrelated to either the homophone’s Korean meaning or the homophone’s English meaning. In overtly responded situations, ERP results revealed that the reduced N400 effect in bilinguals for homophone-related word pairs correlated positively with the amount of their daily exposure to Korean. In covertly responded situations, ERP results showed a reduced late positive component for homophone-related word pairs in the right hemisphere, and this late positive effect was related to the neural efficiency of suppressing interference in a non-linguistic task. Together, these findings suggest 1) that the degree of language co-activation in bilingual spoken word comprehension is modulated by the amount of daily exposure to the non-target language; and 2) that bilinguals who are less influenced by cross-language activation may also have greater efficiency in suppressing interference in a non-linguistic task. PMID:28372943
Phonological and Semantic Knowledge Are Causal Influences on Learning to Read Words in Chinese
ERIC Educational Resources Information Center
Zhou, Lulin; Duff, Fiona J.; Hulme, Charles
2015-01-01
We report a training study that assesses whether teaching the pronunciation and meaning of spoken words improves Chinese children's subsequent attempts to learn to read the words. Teaching the pronunciations of words helps children to learn to read those same words, and teaching the pronunciations and meanings improves learning still further.…
Hunter Adams, Jo; Penrose, Katherine L.; Cochran, Jennifer; Rybin, Denis; Doros, Gheorghe; Henshaw, Michelle; Paasche-Orlow, Michael
2013-01-01
Background This study investigated the impact of English health literacy, spoken proficiency, and acculturation on preventive dental care use among Somali refugees in Massachusetts. Methods 439 adult Somalis who had been in the U.S. ≤ 10 years were interviewed. English functional health literacy, dental word recognition, and spoken proficiency were measured using the STOFHLA, REALD, and BEST Plus. Logistic regression tested associations of language measures with preventive dental care use. Results Without controlling for acculturation, participants with higher health literacy were 2.0 times more likely to have had preventive care (p=0.02). Subjects with higher word recognition were 1.8 times as likely to have had preventive care (p=0.04). Controlling for acculturation, these associations were no longer significant, and spoken proficiency was not associated with increased preventive care use. Discussion English health literacy and spoken proficiency were not associated with preventive dental care. Other factors, like acculturation, were more predictive of care use than language skills. PMID:23748902
Does Talker-Specific Information Influence Lexical Competition? Evidence from Phonological Priming
ERIC Educational Resources Information Center
Dufour, Sophie; Nguyen, Noël
2017-01-01
In this study, we examined whether the lexical competition process embraced by most models of spoken word recognition is sensitive to talker-specific information. We used a lexical decision task and a long lag priming experiment in which primes and targets sharing all phonemes except the last one (e.g., /bagaR/"fight" vs.…
Deviant ERP Response to Spoken Non-Words among Adolescents Exposed to Cocaine in Utero
ERIC Educational Resources Information Center
Landi, Nicole; Crowley, Michael J.; Wu, Jia; Bailey, Christopher A.; Mayes, Linda C.
2012-01-01
Concern for the impact of prenatal cocaine exposure (PCE) on human language development is based on observations of impaired performance on assessments of language skills in these children relative to non-exposed children. We investigated the effects of PCE on speech processing ability using event-related potentials (ERPs) among a sample of…
The Fluid Reading Primer: Animated Decoding Support for Emergent Readers.
ERIC Educational Resources Information Center
Zellweger, Polle T.; Mackinlay, Jock D.
A prototype application called the Fluid Reading Primer was developed to help emergent readers with the process of decoding written words into their spoken forms. The Fluid Reading Primer is part of a larger research project called Fluid Documents, which is exploring the use of interactive animation of typography to show additional information in…
ERIC Educational Resources Information Center
Mirman, Daniel; Yee, Eiling; Blumstein, Sheila E.; Magnuson, James S.
2011-01-01
We used eye-tracking to investigate lexical processing in aphasic participants by examining the fixation time course for rhyme (e.g., "carrot-parrot") and cohort (e.g., "beaker-beetle") competitors. Broca's aphasic participants exhibited larger rhyme competition effects than age-matched controls. A re-analysis of previously reported data (Yee,…
Decreased Sensitivity to Phonemic Mismatch in Spoken Word Processing in Adult Developmental Dyslexia
ERIC Educational Resources Information Center
Janse, Esther; de Bree, Elise; Brouwer, Susanne
2010-01-01
Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as "procodile" for "crocodile") for the atypical population of dyslexic adults to see to what…
ERIC Educational Resources Information Center
Ng, Shukhan; Payne, Brennan R.; Stine-Morrow, Elizabeth A. L.; Federmeier, Kara D.
2018-01-01
We investigated how struggling adult readers make use of sentence context to facilitate word processing when comprehending spoken language, conditions under which print decoding is not a barrier to comprehension. Stimuli were strongly and weakly constraining sentences (as measured by cloze probability), which ended with the most expected word…
ERIC Educational Resources Information Center
Shtyrov, Yury; Smith, Marie L.; Horner, Aidan J.; Henson, Richard; Nathan, Pradeep J.; Bullmore, Edward T.; Pulvermuller, Friedemann
2012-01-01
Previous research indicates that, under explicit instructions to listen to spoken stimuli or in speech-oriented behavioural tasks, the brain's responses to senseless pseudowords are larger than those to meaningful words; the reverse is true in non-attended conditions. These differential responses could be used as a tool to trace linguistic…
Effects of prosody and position on the timing of deictic gestures.
Rusiewicz, Heather Leavy; Shaiman, Susan; Iverson, Jana M; Szuminsky, Neil
2013-04-01
In this study, the authors investigated the hypothesis that the perceived tight temporal synchrony of speech and gesture is evidence of an integrated spoken language and manual gesture communication system. It was hypothesized that experimental manipulations of the spoken response would affect the timing of deictic gestures. The authors manipulated syllable position and contrastive stress in compound words in multiword utterances by using a repeated-measures design to investigate the degree of synchronization of speech and pointing gestures produced by 15 American English speakers. Acoustic measures were compared with the gesture movement recorded via capacitance. Although most participants began a gesture before the target word, the temporal parameters of the gesture changed as a function of syllable position and prosody. Syllables with contrastive stress in the 2nd position of compound words were the longest in duration and also most consistently affected the timing of gestures, as measured by several dependent measures. Increasing the stress of a syllable significantly affected the timing of a corresponding gesture, notably for syllables in the 2nd position of words that would not typically be stressed. The findings highlight the need to consider the interaction of gestures and spoken language production from a motor-based perspective of coordination.
The word-length effect and disyllabic words.
Lovatt, P; Avons, S E; Masterson, J
2000-02-01
Three experiments compared immediate serial recall of disyllabic words that differed on spoken duration. Two sets of long- and short-duration words were selected, in each case maximizing duration differences but matching for frequency, familiarity, phonological similarity, and number of phonemes, and controlling for semantic associations. Serial recall measures were obtained using auditory and visual presentation and spoken and picture-pointing recall. In Experiments 1a and 1b, using the first set of items, long words were better recalled than short words. In Experiments 2a and 2b, using the second set of items, no difference was found between long and short disyllabic words. Experiment 3 confirmed the large advantage for short-duration words in the word set originally selected by Baddeley, Thomson, and Buchanan (1975). These findings suggest that there is no reliable advantage for short-duration disyllables in span tasks, and that previous accounts of a word-length effect in disyllables are based on accidental differences between list items. The failure to find an effect of word duration casts doubt on theories that propose that the capacity of memory span is determined by the duration of list items or the decay rate of phonological information in short-term memory.
Lexical access in sign language: a computational model
Caselli, Naomi K.; Cohen-Goldberg, Ariel M.
2014-01-01
Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition. PMID:24860539
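As a rough sketch of the spreading activation principle the architecture borrows (a toy under assumed node names and parameters, not the authors' implementation), the following Python fragment lets activation flow between connected lexical nodes and decay over discrete time steps:

    # Toy spreading-activation network; node names and parameter
    # values are invented for illustration.
    decay, spread = 0.5, 0.2
    network = {
        "sign_a": ["sign_b", "sign_c"],  # neighbors share sub-lexical units
        "sign_b": ["sign_a"],
        "sign_c": ["sign_a"],
    }
    activation = {node: 0.0 for node in network}
    activation["sign_a"] = 1.0  # perceptual input activates one node

    for step in range(5):
        incoming = {node: 0.0 for node in network}
        for node, level in activation.items():
            for neighbor in network[node]:
                incoming[neighbor] += spread * level
        activation = {node: decay * activation[node] + incoming[node]
                      for node in network}
        print(step, {n: round(a, 3) for n, a in activation.items()})

Neighborhood density effects fall out of such dynamics: densely connected nodes both send and receive more activation than sparsely connected ones.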
Language-driven anticipatory eye movements in virtual reality.
Eichert, Nicole; Peeters, David; Hagoort, Peter
2018-06-01
Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.
Cognitive aging and hearing acuity: modeling spoken language comprehension.
Wingfield, Arthur; Amichetti, Nicole M; Lash, Amanda
2015-01-01
The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model; where it is strong and where there are gaps to be filled.
Lexical frequency and acoustic reduction in spoken Dutch
NASA Astrophysics Data System (ADS)
Pluymaekers, Mark; Ernestus, Mirjam; Baayen, R. Harald
2005-10-01
This study investigates the effects of lexical frequency on the durational reduction of morphologically complex words in spoken Dutch. The hypothesis that high-frequency words are more reduced than low-frequency words was tested by comparing the durations of affixes occurring in different carrier words. Four Dutch affixes were investigated, each occurring in a large number of words with different frequencies. The materials came from a large database of face-to-face conversations. For each word containing a target affix, one token was randomly selected for acoustic analysis. Measurements were made of the duration of the affix as a whole and the durations of the individual segments in the affix. For three of the four affixes, a higher frequency of the carrier word led to shorter realizations of the affix as a whole, individual segments in the affix, or both. Other relevant factors were the sex and age of the speaker, segmental context, and speech rate. To accommodate for these findings, models of speech production should allow word frequency to affect the acoustic realizations of lower-level units, such as individual speech sounds occurring in affixes.
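The core relationship the study tests, shorter affix durations in higher-frequency carrier words, amounts to a simple regression of duration on log frequency. The numbers in the Python sketch below are invented purely for illustration and are not the study's data:

    import numpy as np

    # Invented example data: log carrier-word frequency vs. affix
    # duration in milliseconds.
    log_freq = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    duration = np.array([120.0, 112.0, 101.0, 95.0, 88.0])

    slope, intercept = np.polyfit(log_freq, duration, 1)
    # A negative slope mirrors the reported pattern: affixes in
    # higher-frequency carrier words have shorter realizations.
    print(f"slope = {slope:.1f} ms per log-frequency unit")  # -8.1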
Complex network structure influences processing in long-term and short-term memory.
Vitevitch, Michael S; Chan, Kit Ying; Roodenrys, Steven
2012-07-01
Complex networks describe how entities in systems interact; the structure of such networks is argued to influence processing. One measure of network structure, the clustering coefficient, C, measures the extent to which the neighbors of a node are also neighbors of each other. Previous psycholinguistic experiments found that the C of phonological word-forms influenced retrieval from the mental lexicon (that portion of long-term memory dedicated to language) during the on-line recognition and production of spoken words. In the present study we examined how network structure influences other retrieval processes in long- and short-term memory. In a false-memory task, examining long-term memory, participants falsely recognized more words with low C than high C. In a recognition memory task, examining veridical memories in long-term memory, participants correctly recognized more words with low C than high C. However, participants in a serial recall task, examining redintegration in short-term memory, recalled lists comprised of high-C words more accurately than lists comprised of low-C words. These results demonstrate that network structure influences cognitive processes associated with several forms of memory including lexical, long-term, and short-term.
de Lira, Juliana Onofre; Minett, Thaís Soares Cianciarullo; Bertolucci, Paulo Henrique Ferreira; Ortiz, Karin Zazo
2014-01-01
Alzheimer's disease (AD) is characterized by impairments in memory and other cognitive functions such as language, which can be affected in all aspects including discourse. A picture description task is considered an effective way of obtaining a discourse sample whose key feature is the ability to retrieve appropriate lexical items. There is no consensus on findings showing that performance in content processing of spoken discourse deteriorates from the mildest phase of AD. To compare the quantity and quality of discourse among patients with mild to moderate AD and controls, a cross-sectional study was designed. Subjects aged 50 years and older of both sexes, with one year or more of education, were divided into three groups: control (CG), mild AD (ADG1) and moderate AD (ADG2). Participants were asked to describe the "cookie theft" picture. The total number of complete words spoken and information units (IU) were included in the analysis. There was no significant difference among groups in terms of age, schooling and sex. For number of words spoken, the CG performed significantly better than both the ADG1 and ADG2, but no difference between the two latter groups was found. The CG produced almost twice as many information units as the ADG1 and more than double that of the ADG2. Moreover, ADG2 patients had worse performance on IUs compared to the ADG1. Decreased performance in quantity and content of discourse was evident in patients with AD from the mildest phase, but only content (IU) continued to worsen with disease progression.
When does word frequency influence written production?
Baus, Cristina; Strijkers, Kristof; Costa, Albert
2013-01-01
The aim of the present study was to explore the central (e.g., lexical processing) and peripheral processes (motor preparation and execution) underlying word production during typewriting. To do so, we tested non-professional typists in a picture typing task while continuously recording EEG. Participants were instructed to write (by means of a standard keyboard) the corresponding name for a given picture. The lexical frequency of the words was manipulated: half of the picture names were of high frequency while the remaining were of low frequency. Different measures were obtained: (1) first keystroke latency and (2) keystroke latency of the subsequent letters and duration of the word. Moreover, ERPs locked to the onset of the picture presentation were analyzed to explore the temporal course of word frequency effects in typewriting. The results showed an effect of word frequency for the first keystroke latency but not for the duration of the word or the speed with which letters were typed (interstroke intervals). The electrophysiological results showed the expected ERP frequency effect at posterior sites: amplitudes for low-frequency words were more positive than those for high-frequency words. However, relative to previous evidence in the spoken modality, the frequency effect appeared in a later time-window. These results demonstrate two marked differences in the processing dynamics underpinning typing compared to speaking: first, central processing dynamics between speaking and typing differ already in the manner in which words are accessed; second, central processing differences in typing, unlike speaking, do not cascade to peripheral processes involved in response execution.
Children reading spoken words: interactions between vocabulary and orthographic expectancy.
Wegener, Signy; Wang, Hua-Chen; de Lissa, Peter; Robidoux, Serje; Nation, Kate; Castles, Anne
2018-05-01
There is an established association between children's oral vocabulary and their word reading but its basis is not well understood. Here, we present evidence from eye movements for a novel mechanism underlying this association. Two groups of 18 Grade 4 children received oral vocabulary training on one set of 16 novel words (e.g., 'nesh', 'coib'), but no training on another set. The words were assigned spellings that were either predictable from phonology (e.g., nesh) or unpredictable (e.g., koyb). These were subsequently shown in print, embedded in sentences. Reading times were shorter for orally familiar than unfamiliar items, and for words with predictable than unpredictable spellings but, importantly, there was an interaction between the two: children demonstrated a larger benefit of oral familiarity for predictable than for unpredictable items. These findings indicate that children form initial orthographic expectations about spoken words before first seeing them in print. A video abstract of this article can be viewed at: https://youtu.be/jvpJwpKMM3E. © 2017 John Wiley & Sons Ltd.
A cascaded neuro-computational model for spoken word recognition
NASA Astrophysics Data System (ADS)
Hoya, Tetsuya; van Leeuwen, Cees
2010-03-01
In human speech recognition, words are analysed at both pre-lexical (i.e., sub-word) and lexical (word) levels. The aim of this paper is to propose a constructive neuro-computational model that incorporates both these levels as cascaded layers of pre-lexical and lexical units. The layered structure enables the system to handle the variability of real speech input. Within the model, receptive fields of the pre-lexical layer consist of radial basis functions; the lexical layer is composed of units that perform pattern matching between their internal template and a series of labels, corresponding to the winning receptive fields in the pre-lexical layer. The model adapts through self-tuning of all units, in combination with the formation of a connectivity structure through unsupervised (first layer) and supervised (higher layers) network growth. Simulation studies show that the model can achieve a level of performance in spoken word recognition similar to that of a benchmark approach using hidden Markov models, while enabling parallel access to word candidates in lexical decision making.
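A caricature of the cascade in Python may help fix ideas; the centers, labels, and matching rule below are assumptions for illustration rather than the authors' implementation. A pre-lexical layer of radial basis functions labels each input frame with its winning receptive field, and a lexical unit scores the label sequence against its internal template:

    import numpy as np

    def rbf_responses(frame, centers, width=1.0):
        # Radial-basis-function activations of the pre-lexical units.
        dists = np.linalg.norm(centers - frame, axis=1)
        return np.exp(-(dists ** 2) / (2 * width ** 2))

    centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])  # toy units
    labels = ["a", "b", "c"]

    frames = np.array([[0.1, -0.1], [0.9, 1.1], [1.9, 0.2]])  # toy input
    winners = [labels[int(np.argmax(rbf_responses(f, centers)))]
               for f in frames]

    template = ["a", "b", "c"]  # a lexical unit's internal template
    score = sum(w == t for w, t in zip(winners, template)) / len(template)
    print(winners, score)  # ['a', 'b', 'c'] 1.0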
Wang, Jin; Joanisse, Marc F; Booth, James R
2018-04-01
The left ventral occipitotemporal cortex (vOT) is important in visual word recognition. Studies have shown that the left vOT is generally observed to be involved in spoken language processing in skilled readers, suggesting automatic access to corresponding orthographic information. However, little is known about where and how the left vOT is involved in the spoken language processing of young children with emerging reading ability. In order to answer this question, we examined the relation of reading ability in 5-6-year-old kindergarteners to the activation of vOT during an auditory phonological awareness task. Two experimental conditions, onset word pairs that shared the first phoneme and rhyme word pairs that shared the final biphone/triphone, were compared to allow a measurement of vOT's activation to small (i.e., onsets) and large (i.e., rhymes) grain sizes. We found that higher reading ability was associated with better accuracy in the onset, but not the rhyme, condition. In addition, higher reading ability was associated with greater sensitivity in the posterior left vOT only for the contrast of the onset versus the rhyme condition. These results suggest that the acquisition of reading results in greater specialization of the posterior vOT for smaller rather than larger grain sizes in young children. Copyright © 2018. Published by Elsevier Ltd.
Influences of High and Low Variability on Infant Word Recognition
ERIC Educational Resources Information Center
Singh, Leher
2008-01-01
Although infants begin to encode and track novel words in fluent speech by 7.5 months, their ability to recognize words is somewhat limited at this stage. In particular, when the surface form of a word is altered, by changing the gender or affective prosody of the speaker, infants begin to falter at spoken word recognition. Given that natural…
Locus of Word Frequency Effects in Spelling to Dictation: Still at the Orthographic Level!
ERIC Educational Resources Information Center
Bonin, Patrick; Laroche, Betty; Perret, Cyril
2016-01-01
The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological…
ERP evidence for implicit L2 word stress knowledge in listeners of a fixed-stress language.
Kóbor, Andrea; Honbolygó, Ferenc; Becker, Angelika B C; Schild, Ulrike; Csépe, Valéria; Friedrich, Claudia K
2018-06-01
Languages with contrastive stress, such as English or German, distinguish some words only via the stress status of their syllables, such as "CONtent" and "conTENT" (capitals indicate a stressed syllable). Listeners with a fixed-stress native language, such as Hungarian, have difficulties in explicitly discriminating variation of the stress position in a second language (L2). However, Event-Related Potentials (ERPs) indicate that Hungarian listeners implicitly notice variation from their native fixed-stress pattern. Here we used ERPs to investigate Hungarian listeners' implicit L2 processing. In a cross-modal word fragment priming experiment, we presented spoken stressed and unstressed German word onsets (primes) followed by printed versions of initially stressed and initially unstressed German words (targets). ERPs reflected stress priming exerted by both prime types. This indicates that Hungarian listeners implicitly linked German words with the stress status of the primes. Thus, the formerly described explicit stress discrimination difficulty associated with a fixed-stress native language does not generalize to implicit aspects of L2 word stress processing. Copyright © 2018 Elsevier B.V. All rights reserved.
Hurtado, Nereyda; Marchman, Virginia A.; Fernald, Anne
2010-01-01
It is well established that variation in caregivers' speech is associated with language outcomes, yet little is known about the learning principles that mediate these effects. This longitudinal study (n = 27) explores whether Spanish-learning children's early experiences with language predict efficiency in real-time comprehension and vocabulary learning. Measures of mothers' speech at 18 months were examined in relation to children's speech processing efficiency and reported vocabulary at 18 and 24 months. Children of mothers who provided more input at 18 months knew more words and were faster in word recognition at 24 months. Moreover, multiple regression analyses indicated that the influences of caregiver speech on speed of word recognition and vocabulary were largely overlapping. This study provides the first evidence that input shapes children's lexical processing efficiency and that vocabulary growth and increasing facility in spoken word comprehension work together to support the uptake of the information that rich input affords the young language learner. PMID:19046145
Spoken Idiom Recognition: Meaning Retrieval and Word Expectancy
ERIC Educational Resources Information Center
Tabossi, Patrizia; Fanari, Rachele; Wolf, Kinou
2005-01-01
This study investigates recognition of spoken idioms occurring in neutral contexts. Experiment 1 showed that both predictable and non-predictable idiom meanings are available at string offset. Yet, only predictable idiom meanings are active halfway through a string and remain active after the string's literal conclusion. Experiment 2 showed that…
The employment of a spoken language computer applied to an air traffic control task.
NASA Technical Reports Server (NTRS)
Laveson, J. I.; Silver, C. A.
1972-01-01
Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter, with a traffic flow simulation ranging from single-engine to commercial jet aircraft, provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve controller performance.
Gender differences in the activation of inferior frontal cortex during emotional speech perception.
Schirmer, Annett; Zysset, Stefan; Kotz, Sonja A; Yves von Cramon, D
2004-03-01
We investigated the brain regions that mediate the processing of emotional speech in men and women by presenting positive and negative words that were spoken with happy or angry prosody. Hence, emotional prosody and word valence were either congruous or incongruous. We assumed that an fMRI contrast between congruous and incongruous presentations would reveal the structures that mediate the interaction of emotional prosody and word valence. The left inferior frontal gyrus (IFG) was more strongly activated in incongruous as compared to congruous trials. This difference in IFG activity was significantly larger in women than in men. Moreover, the congruence effect was significant in women, whereas it appeared only as a tendency in men. As the left IFG has been repeatedly implicated in semantic processing, these findings are taken as evidence that semantic processing in women is more susceptible to influences from emotional prosody than is semantic processing in men. Moreover, the present data suggest that the left IFG mediates increased semantic processing demands imposed by an incongruence between emotional prosody and word valence.
Noah, J Adam; Dravida, Swethasri; Zhang, Xian; Yahil, Shaul; Hirsch, Joy
2017-01-01
The interpretation of social cues is a fundamental function of human social behavior, and resolution of inconsistencies between spoken and gestural cues plays an important role in successful interactions. To gain insight into these underlying neural processes, we compared neural responses in a traditional color/word conflict task and in a gesture/word conflict task to test hypotheses of domain-general and domain-specific conflict resolution. In the gesture task, recorded spoken words ("yes" and "no") were presented simultaneously with video recordings of actors performing one of the following affirmative or negative gestures: thumbs up, thumbs down, head nodding (up and down), or head shaking (side-to-side), thereby generating congruent and incongruent communication stimuli between gesture and words. Participants identified the communicative intent of the gestures as either positive or negative. In the color task, participants were presented the words "red" and "green" in either red or green font and were asked to identify the color of the letters. We observed a classic "Stroop" behavioral interference effect, with participants showing increased response time for incongruent trials relative to congruent ones for both the gesture and color tasks. Hemodynamic signals acquired using functional near-infrared spectroscopy (fNIRS) were increased in the right dorsolateral prefrontal cortex (DLPFC) for incongruent trials relative to congruent trials for both tasks, consistent with a common, domain-general mechanism for detecting conflict. However, activity in the left DLPFC and frontal eye fields and the right temporal-parietal junction (TPJ), superior temporal gyrus (STG), supramarginal gyrus (SMG), and primary and auditory association cortices was greater for the gesture task than the color task. Thus, in addition to domain-general conflict processing mechanisms, as suggested by common engagement of the right DLPFC, socially specialized neural modules localized to the left DLPFC and right TPJ including adjacent homologous receptive language areas were engaged when processing conflicting communications. These findings contribute to an emerging view of specialization within the TPJ and adjacent areas for interpretation of social cues and indicate a role for the region in processing social conflict.
Bullock-Rest, Natasha; Cerny, Alissa; Sweeney, Carol; Palumbo, Carole; Kurowski, Kathleen; Blumstein, Sheila E
2013-08-01
Previous behavioral work has shown that the phonetic realization of words in spoken word production is influenced by sound shape properties of the lexicon. A recent fMRI study (Peramunage, Blumstein, Myers, Goldrick, & Baese-Berk, 2011) showed that this influence of lexical structure on phonetic implementation recruited a network of areas that included the supramarginal gyrus (SMG) extending into the posterior superior temporal gyrus (pSTG) and the inferior frontal gyrus (IFG). The current study examined whether lesions in these areas result in a concomitant functional deficit. Ten individuals with aphasia and 8 normal controls read words aloud in which half had a voiced stop consonant minimal pair (e.g. tame; dame), and the other half did not (e.g. tooth; (*)dooth). Voice onset time (VOT) analysis of the initial voiceless stop consonant revealed that aphasic participants with lesions including the IFG and/or the SMG behaved as normal controls did, showing VOT lengthening effects for minimal pair words compared to non-minimal pair words. The failure of participants with damage in the IFG or SMG to show a functional deficit in the production of VOT as a function of the lexical properties of a word suggests that fMRI findings do not always predict the effects of lesions on behavioral deficits in aphasia. Nonetheless, the pattern of production errors made by the aphasic participants did reflect properties of the lexicon, supporting the view that the SMG and IFG are part of a lexical network involved in spoken word production. Copyright © 2013 Elsevier Inc. All rights reserved.
An Analysis of the Most Frequently Occurring Words in Spoken American English.
ERIC Educational Resources Information Center
Plant, Geoff
1999-01-01
A study analyzed the frequency of occurrence of consonants, vowels, and diphthongs, the syllabic structure of the words, and the segmental structure of the 311 monosyllabic words among the 500 words that occur most frequently in English. Three manners of articulation accounted for nearly 75 percent of all consonant occurrences: stops, semi-vowels, and nasals.…
Interference Effects on the Recall of Pictures, Printed Words and Spoken Words.
ERIC Educational Resources Information Center
Burton, John K.; Bruning, Roger H.
Thirty college undergraduates participated in a study of the effects of acoustic and visual interference on the recall of word and picture triads in both short-term and long-term memory. The subjects were presented 24 triads of monosyllabic nouns representing all of the possible combinations of presentation types: pictures, printed words, and…
Does Hearing Several Speakers Reduce Foreign Word Learning?
ERIC Educational Resources Information Center
Ludington, Jason Darryl
2016-01-01
Learning spoken word forms is a vital part of second language learning, and CALL lends itself well to this training. Not enough is known, however, about how auditory variation across speech tokens may affect receptive word learning. To find out, 144 Thai university students with no knowledge of the Patani Malay language learned 24 foreign words in…
Modeling of Word Translation: Activation Flow from Concepts to Lexical Items
ERIC Educational Resources Information Center
Roelofs, Ardi; Dijkstra, Ton; Gerakaki, Svetlana
2013-01-01
Whereas most theoretical and computational models assume a continuous flow of activation from concepts to lexical items in spoken word production, one prominent model assumes that the mapping of concepts onto words happens in a discrete fashion (Bloem & La Heij, 2003). Semantic facilitation of context pictures on word translation has been taken to…
ERIC Educational Resources Information Center
Flanigan, Kevin
2006-01-01
This article focuses on a concept that has rarely been studied in beginning reading research--a child's concept of word in text. Recent examinations of this phenomenon suggest that a child's ability to match spoken words to written words while reading--a concept of word in text--plays a pivotal role in early reading development. In this article,…
Changes in N400 Topography Following Intensive Speech Language Therapy for Individuals with Aphasia
ERIC Educational Resources Information Center
Wilson, K. Ryan; O'Rourke, Heather; Wozniak, Linda A.; Kostopoulos, Ellina; Marchand, Yannick; Newman, Aaron J.
2012-01-01
Our goal was to characterize the effects of intensive aphasia therapy on the N400, an electrophysiological index of lexical-semantic processing. Immediately before and after 4 weeks of intensive speech-language therapy, people with aphasia performed a task in which they had to determine whether spoken words were a "match" or a "mismatch" to…
ERIC Educational Resources Information Center
Grogan, A.; Parker Jones, O.; Ali, N.; Crinion, J.; Orabona, S.; Mechias, M. L.; Ramsden, S.; Green, D. W.; Price, C. J.
2012-01-01
We used structural magnetic resonance imaging (MRI) and voxel based morphometry (VBM) to investigate whether the efficiency of word processing in the non-native language (lexical efficiency) and the number of non-native languages spoken (2+ versus 1) were related to local differences in the brain structure of bilingual and multilingual speakers.…
NASA Astrophysics Data System (ADS)
Balbin, Jessie R.; Padilla, Dionis A.; Fausto, Janette C.; Vergara, Ernesto M.; Garcia, Ramon G.; Delos Angeles, Bethsedea Joy S.; Dizon, Neil John A.; Mardo, Mark Kevin N.
2017-02-01
This research is about translating a series of hand gestures to form a word and producing its equivalent sound as read and said with a Filipino accent, using Support Vector Machine and Mel Frequency Cepstral Coefficient analysis. The concept is to detect Filipino speech input and translate the spoken words to their text form in Filipino. This study aims to help the Filipino deaf community impart their thoughts through hand gestures and communicate with people who do not know how to read hand gestures. It also helps literate deaf individuals simply read the spoken words relayed to them using the Filipino speech-to-text system.
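The pairing of techniques named in this abstract (MFCC features for speech, an SVM for classification) can be illustrated in a few lines. Below is a minimal sketch, assuming the librosa and scikit-learn libraries; the file names, the 16 kHz sampling rate, and the two-word label set are illustrative assumptions, not details from the study.

```python
# Minimal MFCC + SVM sketch; paths and labels are hypothetical.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13):
    """Load an audio clip and summarize it as a fixed-length MFCC vector."""
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    # Average over time frames so every clip yields the same feature size.
    return mfcc.mean(axis=1)

# Hypothetical training recordings of the Filipino words "oo" and "hindi".
X = np.array([mfcc_features(p) for p in ["oo_01.wav", "hindi_01.wav"]])
y = np.array(["oo", "hindi"])

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([mfcc_features("unknown.wav")]))
```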
Lexical restructuring in the absence of literacy.
Ventura, Paulo; Kolinsky, Régine; Fernandes, Sandra; Querido, Luís; Morais, José
2007-11-01
Vocabulary growth was suggested to prompt the implementation of increasingly finer-grained lexical representations of spoken words in children (e.g., [Metsala, J. L., & Walley, A. C. (1998). Spoken vocabulary growth and the segmental restructuring of lexical representations: precursors to phonemic awareness and early reading ability. In J. L. Metsala & L. C. Ehri (Eds.), Word recognition in beginning literacy (pp. 89-120). Hillsdale, NJ: Erlbaum.]). Although literacy was not explicitly mentioned in this lexical restructuring hypothesis, the process of learning to read and spell might also have a significant impact on the specification of lexical representations (e.g., [Carroll, J. M., & Snowling, M. J. (2001). The effects of global similarity between stimuli on children's judgments of rime and alliteration. Applied Psycholinguistics, 22, 327-342.]; [Goswami, U. (2000). Phonological representations, reading development and dyslexia: Towards a cross-linguistic theoretical framework. Dyslexia, 6, 133-151.]). This is what we checked in the present study. We manipulated word frequency and neighborhood density in a gating task (Experiment 1) and a word-identification-in-noise task (Experiment 2) presented to Portuguese literate and illiterate adults. Ex-illiterates were also tested in Experiment 2 in order to disentangle the effects of vocabulary size and literacy. There was an interaction between word frequency and neighborhood density, which was similar in the three groups. These did not differ even for the words that are supposed to undergo lexical restructuring the latest (low frequency words from sparse neighborhoods). Thus, segmental lexical representations seem to develop independently of literacy. While segmental restructuring is not affected by literacy, it constrains the development of phoneme awareness as shown by the fact that, in Experiment 3, neighborhood density modulated the phoneme deletion performance of both illiterates and ex-illiterates.
Mirman, Daniel; Yee, Eiling; Blumstein, Sheila E.; Magnuson, James S.
2011-01-01
We used eye tracking to investigate lexical processing in aphasic participants by examining the fixation time course for rhyme (e.g., carrot – parrot) and cohort (e.g., beaker – beetle) competitors. Broca’s aphasic participants exhibited larger rhyme competition effects than age-matched controls. A reanalysis of previously reported data (Yee, Blumstein, & Sedivy, 2008) confirmed that Wernicke’s aphasic participants exhibited larger cohort competition effects. Individual-level analyses revealed a negative correlation between rhyme and cohort competition effect size across both groups of aphasic participants. Computational model simulations were performed to examine which of several accounts of lexical processing deficits in aphasia might account for the observed effects. Simulation results revealed that slower deactivation of lexical competitors could account for increased cohort competition in Wernicke’s aphasic participants; auditory perceptual impairment could account for increased rhyme competition in Broca's aphasic participants; and a perturbation of a parameter controlling selection among competing alternatives could account for both patterns, as well as the correlation between the effects. In light of these simulation results, we discuss theoretical accounts that have the potential to explain the dynamics of spoken word recognition in aphasia and the possible roles of anterior and posterior brain regions in lexical processing and cognitive control. PMID:21371743
Enhancing Vowel Discrimination Using Constructed Spelling
ERIC Educational Resources Information Center
Stewart, Katherine; Hayashi, Yusuke; Saunders, Kathryn
2010-01-01
In a computerized task, an adult with intellectual disabilities learned to construct consonant-vowel-consonant words in the presence of corresponding spoken words. During the initial assessment, the participant demonstrated high accuracy on one word group (containing the vowel-consonant units "it" and "un") but low accuracy on the other group…
Using Signs to Facilitate Vocabulary in Children with Language Delays
ERIC Educational Resources Information Center
Lederer, Susan Hendler; Battaglia, Dana
2015-01-01
The purpose of this article is to explore recommended practices in choosing and using key word signs (i.e., simple single-word gestures for communication) to facilitate first spoken words in hearing children with language delays. Developmental, theoretical, and empirical supports for this practice are discussed. Practical recommendations for…
Immediate effects of form-class constraints on spoken word recognition
Magnuson, James S.; Tanenhaus, Michael K.; Aslin, Richard N.
2008-01-01
In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be linked to pair-wise conceptual relations between words. Using an artificial lexicon, we found immediate integration of syntactic expectations based on pragmatic constraints linked to syntactic categories rather than words: phonologically similar “nouns” and “adjectives” did not compete when a combination of syntactic and visual information strongly predicted form class. These results suggest that predictive context is integrated continuously, and that previous findings supporting delayed context integration stem from weak contexts rather than delayed integration. PMID:18675408
Takashima, Atsuko; Bakker, Iske; van Hell, Janet G; Janzen, Gabriele; McQueen, James M
2017-04-01
When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems. Copyright © 2016 Elsevier Inc. All rights reserved.
An eye movement corpus study of the age-of-acquisition effect.
Dirix, Nicolas; Duyck, Wouter
2017-12-01
In the present study, we investigated the effects of word-level age of acquisition (AoA) on natural reading. Previous studies, using multiple language modalities, showed that earlier-learned words are recognized, read, spoken, and responded to faster than words learned later in life. Until now, in visual word recognition the experimental materials were limited to single-word or sentence studies. We analyzed the data of the Ghent Eye-tracking Corpus (GECO; Cop, Dirix, Drieghe, & Duyck, in press), an eyetracking corpus of participants reading an entire novel, resulting in the first eye movement megastudy of AoA effects in natural reading. We found that the ages at which specific words were learned indeed influenced reading times, above other important (correlated) lexical variables, such as word frequency and length. Shorter fixations for earlier-learned words were consistently found throughout the reading process, in both early (single-fixation durations, first-fixation durations, gaze durations) and late (total reading times) measures. Implications for theoretical accounts of AoA effects and eye movements are discussed.
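The claim that AoA influences reading times "above other important (correlated) lexical variables" amounts to estimating the AoA effect while frequency and length are held in the same model. A minimal regression sketch follows, assuming pandas and statsmodels; the data frame and its column names are invented for illustration and are not the GECO variables.

```python
# Toy sketch: estimate an AoA effect while controlling correlated covariates.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "gaze_ms":  [210, 250, 190, 280, 230],   # hypothetical gaze durations
    "aoa":      [4.2, 8.1, 3.5, 9.0, 6.3],   # age of acquisition (years)
    "log_freq": [3.1, 1.2, 3.8, 0.9, 2.0],
    "length":   [4, 7, 3, 8, 5],
})
model = smf.ols("gaze_ms ~ aoa + log_freq + length", data=df).fit()
print(model.params)  # AoA coefficient estimated above frequency and length
```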
Spoken Grammar Practice and Feedback in an ASR-Based CALL System
ERIC Educational Resources Information Center
de Vries, Bart Penning; Cucchiarini, Catia; Bodnar, Stephen; Strik, Helmer; van Hout, Roeland
2015-01-01
Speaking practice is important for learners of a second language. Computer assisted language learning (CALL) systems can provide attractive opportunities for speaking practice when combined with automatic speech recognition (ASR) technology. In this paper, we present a CALL system that offers spoken practice of word order, an important aspect of…
ERIC Educational Resources Information Center
Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna
2013-01-01
Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in…
Yoder, Paul; Stone, Wendy L
2006-08-01
This randomized group experiment compared the efficacy of 2 communication interventions (Responsive Education and Prelinguistic Milieu Teaching [RPMT] and the Picture Exchange Communication System [PECS]) on spoken communication in 36 preschoolers with autism spectrum disorders (ASD). Each treatment was delivered to children for a maximum total of 24 hr over a 6-month period. Spoken communication was assessed in a rigorous test of generalization at pretreatment, posttreatment, and 6-month follow-up periods. PECS was more successful than RPMT in increasing the number of nonimitative spoken communication acts and the number of different nonimitative words used at the posttreatment period. Considering growth over all 3 measurement periods, an exploratory analysis showed that growth rate of the number of different nonimitative words was faster in the PECS group than in the RPMT group for children who began treatment with relatively high object exploration. In contrast, analogous slopes were steeper in the RPMT group than in the PECS group for children who began treatment with relatively low object exploration.
Moers, Cornelia; Meyer, Antje; Janse, Esther
2017-06-01
High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three different age groups-younger children (8-12 years), adolescents (12-18 years) and older (62-95 years) Dutch speakers-show frequency and TP context effects on spoken word durations in reading aloud, and whether age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
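Transitional probability as described here is a conditional probability estimated from corpus counts: the forward TP of word w2 given its left neighbor w1 is count(w1 w2) / count(w1). A toy sketch in Python, with an invented token sequence standing in for a real corpus:

```python
# Forward transitional probability, P(w2 | w1) = count(w1 w2) / count(w1).
from collections import Counter

tokens = "the dog saw the cat and the dog ran".split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))

def forward_tp(w1, w2):
    return bigrams[(w1, w2)] / unigrams[w1]

print(forward_tp("the", "dog"))  # 2 occurrences of "the dog" / 3 of "the"
```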
Aparicio, Mario; Peigneux, Philippe; Charlier, Brigitte; Balériaux, Danielle; Kavec, Martin; Leybaert, Jacqueline
2017-01-01
We present here the first neuroimaging data for perception of Cued Speech (CS) by deaf adults who are native users of CS. CS is a visual mode of communicating a spoken language through a set of manual cues which accompany lipreading and disambiguate it. With CS, sublexical units of the oral language are conveyed clearly and completely through the visual modality without requiring hearing. The comparison of neural processing of CS in deaf individuals with processing of audiovisual (AV) speech in normally hearing individuals represents a unique opportunity to explore the similarities and differences in neural processing of an oral language delivered in a visuo-manual vs. an AV modality. The study included deaf adult participants who were early CS users and native hearing users of French who process speech audiovisually. Words were presented in an event-related fMRI design. Three conditions were presented to each group of participants. The deaf participants saw CS words (manual + lipread), words presented as manual cues alone, and words presented to be lipread without manual cues. The hearing group saw AV spoken words, audio-alone and lipread-alone. Three findings are highlighted. First, the middle and superior temporal gyrus (excluding Heschl’s gyrus) and left inferior frontal gyrus pars triangularis constituted a common, amodal neural basis for AV and CS perception. Second, integration was inferred in posterior parts of superior temporal sulcus for audio and lipread information in AV speech, but in the occipito-temporal junction, including MT/V5, for the manual cues and lipreading in CS. Third, the perception of manual cues showed a much greater overlap with the regions activated by CS (manual + lipreading) than lipreading alone did. This supports the notion that manual cues play a larger role than lipreading for CS processing. The present study contributes to a better understanding of the role of manual cues as support of visual speech perception in the framework of the multimodal nature of human communication. PMID:28424636
Kang, Sean H K; Gollan, Tamar H; Pashler, Harold
2013-12-01
Second language (L2) instruction programs often ask learners to repeat aloud words spoken by a native speaker. However, recent research on retrieval practice has suggested that imitating native pronunciation might be less effective than drill instruction, wherein the learner is required to produce the L2 words from memory (and given feedback). We contrasted the effectiveness of imitation and retrieval practice drills on learning L2 spoken vocabulary. Learners viewed pictures of objects and heard their names; in the imitation condition, they heard and then repeated aloud each name, whereas in the retrieval practice condition, they tried to produce the name before hearing it. On a final test administered either immediately after training (Exp. 1) or after a 2-day delay (Exp. 2), retrieval practice produced better comprehension of the L2 words, better ability to produce the L2 words, and no loss of pronunciation quality.
Rank-frequency distributions of Romanian words
NASA Astrophysics Data System (ADS)
Cocioceanu, Adrian; Raportaru, Carina Mihaela; Nicolin, Alexandru I.; Jakimovski, Dragan
2017-12-01
The calibration of voice biometrics solutions requires detailed analyses of spoken texts, and in this context we investigate by computational means the rank-frequency distributions of Romanian words and word series to determine the most common words and word series of the language. To this end, we constructed a corpus of approximately 2.5 million words and then determined that the rank-frequency distributions of Romanian words, as well as of series of two and three consecutive words, obey the celebrated Zipf law.
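Zipf's law states that frequency falls off as a power of rank, f(r) ∝ r^(-s) with s close to 1. A minimal sketch of the rank-frequency computation and a log-log fit of the exponent; the corpus file name is a placeholder, not the authors' Romanian corpus.

```python
# Rank-frequency distribution and a log-log Zipf fit; corpus.txt is a placeholder.
import numpy as np
from collections import Counter

text = open("corpus.txt", encoding="utf-8").read().lower().split()
freqs = np.array(sorted(Counter(text).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)

# Slope of log(frequency) vs. log(rank) estimates the Zipf exponent s.
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(f"estimated Zipf exponent: {-slope:.2f}")
```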
Rinaldi, Pasquale; Barca, Laura; Burani, Cristina
2004-08-01
The CFVlexvar.xls database includes imageability, frequency, and grammatical properties of the first words acquired by Italian children. For each of 519 words that are known by children 18-30 months of age (taken from Caselli & Casadio's 1995 Italian version of the MacArthur Communicative Development Inventory), new values of imageability are provided, and values for age of acquisition, child written frequency, and adult written and spoken frequency are included. In this article, correlations among the variables are discussed and the words are grouped into grammatical categories. The results show that words acquired early have imageable referents, are frequently used in the texts read and written by elementary school children, and are frequent in adult written and spoken language. Nouns are acquired earlier and are more imageable than both verbs and adjectives. The composition in grammatical categories of the child's first vocabulary reflects the composition of adult vocabulary. The full set of these norms can be downloaded from www.psychonomic.org/archive/.
The Neural Basis of Competition in Auditory Word Recognition and Spoken Word Production
ERIC Educational Resources Information Center
Righi, Giulia
2010-01-01
The goal of this dissertation is to examine how brain regions respond to different types of competition during word comprehension and word production. I will present three studies that attempt to enhance the current understanding of which brain regions are sensitive to different aspects of competition and how the nature of the stimuli and the…
Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar
2015-12-01
The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample size was 60 Persian 5-7 year old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experiments: auditory-only and audiovisual presentation conditions. The test was a closed-set task including 30 words which were orally presented by a speech-language pathologist. The scores of audiovisual word perception were significantly higher than in the auditory-only condition in the children with normal hearing (P<0.01) and cochlear implant (P<0.05); however, in the children with hearing aid, there was no significant difference between word perception scores in auditory-only and audiovisual presentation conditions (P>0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe-to-profound hearing loss in order to determine whether a cochlear implant or hearing aid has been effective for them; i.e., if a child with hearing impairment using a CI or HA can obtain higher scores in audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have developed appropriately due to an effective CI or HA as one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Piai, Vitória; Roelofs, Ardi; Maris, Eric
2014-01-01
Two fundamental factors affecting the speed of spoken word production are lexical frequency and sentential constraint, but little is known about their timing and electrophysiological basis. In the present study, we investigated event-related potentials (ERPs) and oscillatory brain responses induced by these factors, using a task in which participants named pictures after reading sentences. Sentence contexts were either constraining or nonconstraining towards the final word, which was presented as a picture. Picture names varied in their frequency of occurrence in the language. Naming latencies and electrophysiological responses were examined as a function of context and lexical frequency. Lexical frequency is an index of our cumulative learning experience with words, so lexical-frequency effects most likely reflect access to memory representations for words. Pictures were named faster with constraining than nonconstraining contexts. Associated with this effect, starting around 400 ms pre-picture presentation, oscillatory power between 8 and 30 Hz was lower for constraining relative to nonconstraining contexts. Furthermore, pictures were named faster with high-frequency than low-frequency names, but only for nonconstraining contexts, suggesting differential ease of memory access as a function of sentential context. Associated with the lexical-frequency effect, starting around 500 ms pre-picture presentation, oscillatory power between 4 and 10 Hz was higher for high-frequency than for low-frequency names, but only for constraining contexts. Our results characterise electrophysiological responses associated with lexical frequency and sentential constraint in spoken word production, and point to new avenues for studying these fundamental factors in language production. © 2013 Published by Elsevier Ltd.
Using Wordle as a Supplementary Research Tool
ERIC Educational Resources Information Center
McNaught, Carmel; Lam, Paul
2010-01-01
A word cloud is a special visualization of text in which the more frequently used words are effectively highlighted by occupying more prominence in the representation. We have used Wordle to produce word-cloud analyses of the spoken and written responses of informants in two research projects. The product demonstrates a fast and visually rich way…
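The same frequency-weighted visualization can be reproduced programmatically. A small sketch using the open-source Python wordcloud package as a stand-in for Wordle; the input string is illustrative.

```python
# Frequency-weighted word cloud; input text and output file are illustrative.
from wordcloud import WordCloud

responses = "spoken word recognition spoken word learning word frequency"
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate(responses)          # sizes words by relative frequency
cloud.to_file("responses_cloud.png")
```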
Spoken Word Recognition and Serial Recall of Words from Components in the Phonological Network
ERIC Educational Resources Information Center
Siew, Cynthia S. Q.; Vitevitch, Michael S.
2016-01-01
Network science uses mathematical techniques to study complex systems such as the phonological lexicon (Vitevitch, 2008). The phonological network consists of a "giant component" (the largest connected component of the network) and "lexical islands" (smaller groups of words that are connected to each other, but not to the giant…
The Exception Does Not Rule: Attention Constrains Form Preparation in Word Production
ERIC Educational Resources Information Center
O'Séaghdha, Pádraig G.; Frazer, Alexandra K.
2014-01-01
Form preparation in word production, the benefit of exploiting a useful common sound (such as the first phoneme) of iteratively spoken small groups of words, is notoriously fastidious, exhibiting a seemingly categorical, all-or-none character and a corresponding susceptibility to "killers" of preparation. In particular, the presence of a…
AUDITORY DISCRIMINATION TRAINING IN THE DEVELOPMENT OF WORD ANALYSIS SKILLS.
ERIC Educational Resources Information Center
COLEMAN, JAMES C.; MCNEIL, JOHN D.
The hypothesis that children who are taught to hear and designate separate sounds in spoken words will achieve greater success in learning to analyze printed words was tested. The subjects were 90 kindergarten children, predominantly Mexican-Americans and Negroes. Children were randomly assigned to one of three treatments, each of 3-weeks duration…
Speaker variability augments phonological processing in early word learning
Rost, Gwyneth C.; McMurray, Bob
2010-01-01
Infants in the early stages of word learning have difficulty learning lexical neighbors (i.e., word pairs that differ by a single phoneme), despite the ability to discriminate the same contrast in a purely auditory task. While prior work has focused on top-down explanations for this failure (e.g. task demands, lexical competition), none has examined if bottom-up acoustic-phonetic factors play a role. We hypothesized that lexical neighbor learning could be improved by incorporating greater acoustic variability in the words being learned, as this may buttress still developing phonetic categories, and help infants identify the relevant contrastive dimension. Infants were exposed to pictures accompanied by labels spoken by either a single or multiple speakers. At test, infants in the single-speaker condition failed to recognize the difference between the two words, while infants who heard multiple speakers discriminated between them. PMID:19143806
Balthasar, Andrea J R; Huber, Walter; Weis, Susanne
2011-09-02
Homonym processing in German is of theoretical interest as homonyms specifically involve word form information. In a previous study (Weis et al., 2001), we found inferior parietal activation as a correlate of successfully finding a homonym from written stimuli. The present study tries to clarify the underlying mechanism and to examine to what extent the previous homonym effect depends on visual as opposed to auditory input modality. 18 healthy subjects were examined using an event-related functional magnetic resonance imaging paradigm. Participants had to find and articulate a homonym in relation to two spoken or written words. A semantic-lexical task - oral naming from two-word definitions - was used as a control condition. When comparing brain activation for solved homonym trials to both brain activation for unsolved homonyms and solved definition trials, we obtained two activation patterns, which characterised both auditory and visual processing. Semantic-lexical processing was related to bilateral inferior frontal activation, whereas left inferior parietal activation was associated with finding the correct homonym. As the inferior parietal activation during successful access to the word form of a homonym was independent of input modality, it might be the substrate of access to word form knowledge. Copyright © 2011 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Segal, Osnat; Kishon-Rabin, Liat
2017-01-01
Purpose: The stressed word in a sentence (narrow focus [NF]) conveys information about the intent of the speaker and is therefore important for processing spoken language and in social interactions. The ability of participants with severe-to-profound prelingual hearing loss to comprehend NF has rarely been investigated. The purpose of this study…
Influence of Eye Gaze on Spoken Word Processing: An ERP Study with Infants
ERIC Educational Resources Information Center
Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Friederici, Angela D.
2011-01-01
Eye gaze is an important communicative signal, both as mutual eye contact and as referential gaze to objects. To examine whether attention to speech versus nonspeech stimuli in 4- to 5-month-olds (n = 15) varies as a function of eye gaze, event-related brain potentials were used. Faces with mutual or averted gaze were presented in combination with…
Large-scale Cortical Network Properties Predict Future Sound-to-Word Learning Success
Sheppard, John Patrick; Wang, Ji-Ping; Wong, Patrick C. M.
2013-01-01
The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants’ future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults. PMID:22360625
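The global and local efficiency measures referred to here are standard graph-theoretic quantities (average inverse shortest-path length, computed over the whole graph or within each node's neighborhood). A toy sketch follows, assuming networkx; the simulated signals and the 0.2 correlation threshold are illustrative, not the study's pipeline.

```python
# Toy global/local efficiency computation on a thresholded correlation graph.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_regions = 20
corr = np.corrcoef(rng.normal(size=(n_regions, 200)))  # fake fMRI signals

# Binarize: connect regions whose time courses correlate above a threshold.
adj = (np.abs(corr) > 0.2) & ~np.eye(n_regions, dtype=bool)
G = nx.from_numpy_array(adj.astype(int))

print("global efficiency:", nx.global_efficiency(G))
print("local efficiency:", nx.local_efficiency(G))
```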
Gow, David W; Olson, Bruna B
2015-07-01
Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear however, whether these effects reflect prelexical sensitivity to phonotactic frequency, or lexical "gang effects" in speech perception. In this paper, we use Granger causality analysis of MR-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account.
Corpus-Based Authenticity Analysis of Language Teaching Course Books
ERIC Educational Resources Information Center
Peksoy, Emrah; Harmaoglu, Özhan
2017-01-01
In this study, the resemblance of the language learning course books used in Turkey to authentic language spoken by native speakers is explored by using a corpus-based approach. For this, the 10-million-word spoken part of the British National Corpus was selected as reference corpus. After that, all language learning course books used in high…
ERIC Educational Resources Information Center
Berent, Iris
2008-01-01
Are the phonological representations of printed and spoken words isomorphic? This question is addressed by investigating the restrictions on onsets. Cross-linguistic research suggests that onsets of rising sonority are preferred to sonority plateaus, which, in turn, are preferred to sonority falls (e.g., bnif, bdif, lbif). Of interest is whether…
The role of voice input for human-machine communication.
Cohen, P R; Oviatt, S L
1995-01-01
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent, real-time speech recognition and understanding of naturally spoken utterances with vocabularies of 1,000 to 2,000 words and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803
Hunter, Cynthia R
2016-10-01
Adult aging is associated with decreased accuracy for recognizing speech, particularly in noisy backgrounds and for high neighborhood density words, which sound similar to many other words. In the current study, the time course of neighborhood density effects in young and older adults was compared using event-related potentials (ERP) and behavioral responses in a lexical decision task for spoken words and nonwords presented either in quiet or in noise. Target items sounded similar either to many or to few other words (neighborhood density) but were balanced for the frequency of their component sounds (phonotactic probability). Behavioral effects of density were similar across age groups, but the event-related potential effects of density differed as a function of age group. For young adults, density modulated the amplitude of both the N400 and the later P300 or late positive complex (LPC). For older adults, density modulated only the amplitude of the P300/LPC. Thus, spreading activation to the semantics of lexical neighbors, indexed by the N400 density effect, appears to be reduced or delayed in adult aging. In contrast, effects of density on P300/LPC amplitude were present in both age groups, perhaps reflecting attentional allocation to items that resemble few words in the mental lexicon. The results constitute the first evidence that ERP effects of neighborhood density are affected by adult aging. The age difference may reflect either a unitary density effect that is delayed by approximately 150 ms in older adults, or multiple processes that are differentially affected by aging. Copyright © 2016 Elsevier Ltd. All rights reserved.
Interaction in Spoken Word Recognition Models: Feedback Helps.
Magnuson, James S; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D
2018-01-01
Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.
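The feedforward-versus-feedback contrast at issue can be made concrete with a toy interactive activation loop (a deliberately simplified illustration, not TRACE itself): a phoneme layer drives a word layer, words inhibit one another, and an optional word-to-phoneme feedback path lets the leading word reinforce its own evidence. All sizes, rates, and the two-word lexicon below are invented for illustration.

```python
# Toy interactive activation sketch (not TRACE): feedback vs. no feedback.
import numpy as np

W = np.array([[1, 1, 0, 0],    # toy word 0 -> phonemes 0 and 1
              [1, 0, 1, 0]])   # toy word 1 -> phonemes 0 and 2

def run(noisy_input, feedback=True, steps=30, rate=0.1, decay=0.05):
    p = noisy_input.copy()           # phoneme activations
    w = np.zeros(W.shape[0])         # word activations
    for _ in range(steps):
        # Bottom-up excitation, decay, and lateral inhibition between words.
        w += rate * (W @ p) - decay * w - 0.05 * (w.sum() - w)
        if feedback:
            p += rate * (W.T @ w)    # top-down support for consistent phonemes
        p -= decay * p
        p, w = np.clip(p, 0, 1), np.clip(w, 0, 1)
    return w

noisy = np.array([0.9, 0.4, 0.3, 0.2])  # degraded evidence favoring word 0
print("with feedback:   ", run(noisy, feedback=True))
print("without feedback:", run(noisy, feedback=False))
```

Under noise, the feedback run separates the two word activations more sharply, which is the qualitative pattern the abstract reports.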
ERIC Educational Resources Information Center
Spolsky, Bernard; And Others
As part of a study of the feasibility and effect of teaching Navajo children to read their own language first, a word count collected by 22 Navajo adults interviewing over 200 Navajo 6-year-olds was undertaken. This report discusses the word count and the interview texts in terms of (1) number of sentences, (2) number of words, (3) number of…
A test of the orthographic recoding hypothesis
NASA Astrophysics Data System (ADS)
Gaygen, Daniel E.
2003-04-01
The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.
Developmental changes in the inferior frontal cortex for selecting semantic representations
Lee, Shu-Hui; Booth, James R.; Chen, Shiou-Yuan; Chou, Tai-Li
2012-01-01
Functional magnetic resonance imaging (fMRI) was used to examine the neural correlates of semantic judgments to Chinese words in a group of 10–15 year old Chinese children. Two semantic tasks were used: visual–visual versus visual–auditory presentation. The first word was visually presented (i.e. character) and the second word was either visually or auditorily presented, and the participant had to determine if these two words were related in meaning. Different from English, Chinese has many homophones in which each spoken word corresponds to many characters. The visual–auditory task, therefore, required greater engagement of cognitive control for the participants to select a semantically appropriate answer for the second homophonic word. Weaker association pairs produced greater activation in the mid-ventral region of left inferior frontal gyrus (BA 45) for both tasks. However, this effect was stronger for the visual–auditory task than for the visual–visual task and this difference was stronger for older compared to younger children. The findings suggest greater involvement of semantic selection mechanisms in the cross-modal task requiring the access of the appropriate meaning of homophonic spoken words, especially for older children. PMID:22337757
Do individuals with autism process words in context? Evidence from language-mediated eye-movements.
Brock, Jon; Norbury, Courtenay; Einav, Shiri; Nation, Kate
2008-09-01
It is widely argued that people with autism have difficulty processing ambiguous linguistic information in context. To investigate this claim, we recorded the eye-movements of 24 adolescents with autism spectrum disorder and 24 language-matched peers as they monitored spoken sentences for words corresponding to objects on a computer display. Following a target word, participants looked more at a competitor object sharing the same onset than at phonologically unrelated objects. This effect was, however, mediated by the sentence context such that participants looked less at the phonological competitor if it was semantically incongruous with the preceding verb. Contrary to predictions, the two groups evidenced similar effects of context on eye-movements. Instead, across both groups, the effect of sentence context was reduced in individuals with relatively poor language skills. Implications for the weak central coherence account of autism are discussed.
Is the Orthographic/Phonological Onset a Single Unit in Reading Aloud?
ERIC Educational Resources Information Center
Mousikou, Petroula; Coltheart, Max; Saunders, Steven; Yen, Lisa
2010-01-01
Two main theories of visual word recognition have been developed regarding the way orthographic units in printed words map onto phonological units in spoken words. One theory suggests that a string of single letters or letter clusters corresponds to a string of phonemes (Coltheart, 1978; Venezky, 1970), while the other suggests that a string of…
ERIC Educational Resources Information Center
Appel, Randy; Wood, David
2016-01-01
The correct use of frequently occurring word combinations represents an important part of language proficiency in spoken and written discourse. This study investigates the use of English-language recurrent word combinations in low-level and high-level L2 English academic essays sourced from the Canadian Academic English Language (CAEL) assessment.…
(Almost) Word for Word: As Voice Recognition Programs Improve, Students Reap the Benefits
ERIC Educational Resources Information Center
Smith, Mark
2006-01-01
Voice recognition software is hardly new--attempts at capturing spoken words and turning them into written text have been available to consumers for about two decades. But what was once an expensive and highly unreliable tool has made great strides in recent years, perhaps most recognized in programs such as Nuance's Dragon NaturallySpeaking…
Neighborhoods of Words in the Mental Lexicon. Research on Speech Perception. Technical Report No. 6.
ERIC Educational Resources Information Center
Luce, Paul A.
A study employed computational and experimental methods to address a number of issues related to the representation and structural organization of spoken words in the mental lexicon. Using a computerized lexicon consisting of phonetic transcriptions of 20,000 words, "similarity neighborhoods" for each of the transcriptions were computed…
ERIC Educational Resources Information Center
Dobel, Christian; Junghofer, Markus; Breitenstein, Caterina; Klauke, Benedikt; Knecht, Stefan; Pantev, Christo; Zwitserlood, Pienie
2010-01-01
The plasticity of the adult memory network for integrating novel word forms (lexemes) was investigated with whole-head magnetoencephalography (MEG). We showed that spoken word forms of an (artificial) foreign language are integrated rapidly and successfully into existing lexical and conceptual memory networks. The new lexemes were learned in an…
Polling the effective neighborhoods of spoken words with the verbal transformation effect.
Bashford, James A; Warren, Richard M; Lenz, Peter W
2006-04-01
Studies of the effects of lexical neighbors upon the recognition of spoken words have generally assumed that the most salient competitors differ by a single phoneme. The present study employs a procedure that induces the listeners to perceive and call out the salient competitors. By presenting a recording of a monosyllable repeated over and over, perceptual adaptation is produced, and perception of the stimulus is replaced by perception of a competitor. Reports from groups of subjects were obtained for monosyllables that vary in their frequency-weighted neighborhood density. The findings are compared with predictions based upon the neighborhood activation model.
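The single-phoneme-difference neighborhood assumed in this literature is straightforward to operationalize: neighbors are lexicon entries reachable by one substitution, deletion, or addition, and frequency-weighted density sums their frequencies. A toy sketch with letters standing in for phonemes; the mini-lexicon and counts are invented.

```python
# One-phoneme neighbors and frequency-weighted neighborhood density (toy).
def one_step_neighbors(word, lexicon):
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    subs = {word[:i] + c + word[i+1:] for i in range(len(word)) for c in alphabet}
    dels = {word[:i] + word[i+1:] for i in range(len(word))}
    adds = {word[:i] + c + word[i:] for i in range(len(word) + 1) for c in alphabet}
    return ((subs | dels | adds) - {word}) & set(lexicon)

freq = {"cat": 120, "bat": 40, "cab": 15, "cast": 5, "dog": 90}  # invented counts
neighbors = one_step_neighbors("cat", freq)
density = sum(freq[n] for n in neighbors)
print(neighbors, "frequency-weighted density:", density)  # bat, cab, cast -> 60
```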
Similarity as an organising principle in short-term memory.
LeCompte, D C; Watkins, M J
1993-03-01
The role of stimulus similarity as an organising principle in short-term memory was explored in a series of seven experiments. Each experiment involved the presentation of a short sequence of items that were drawn from two distinct physical classes and arranged such that item class changed after every second item. Following presentation, one item was re-presented as a probe for the 'target' item that had directly followed it in the sequence. Memory for the sequence was considered organised by class if probability of recall was higher when the probe and target were from the same class than when they were from different classes. Such organisation was found when one class was auditory and the other was visual (spoken vs. written words, and sounds vs. pictures). It was also found when both classes were auditory (words spoken in a male voice vs. words spoken in a female voice) and when both classes were visual (digits shown in one location vs. digits shown in another). It is concluded that short-term memory can be organised on the basis of sensory modality and on the basis of certain features within both the auditory and visual modalities.
Word Processing in Children With Autism Spectrum Disorders: Evidence From Event-Related Potentials.
Sandbank, Micheal; Yoder, Paul; Key, Alexandra P
2017-12-20
This investigation was conducted to determine whether young children with autism spectrum disorders exhibited a canonical neural response to word stimuli and whether putative event-related potential (ERP) measures of word processing were correlated with a concurrent measure of receptive language. Additional exploratory analyses were used to examine whether the magnitude of the association between ERP measures of word processing and receptive language varied as a function of the number of word stimuli the participants reportedly understood. Auditory ERPs were recorded in response to spoken words and nonwords presented with equal probability in 34 children aged 2-5 years with a diagnosis of autism spectrum disorder who were in the early stages of language acquisition. Average amplitudes and amplitude differences between word and nonword stimuli within 200-500 ms were examined at left temporal (T3) and parietal (P3) electrode clusters. Receptive vocabulary size and the number of experimental stimuli understood were concurrently measured using the MacArthur-Bates Communicative Development Inventories. Across the entire participant group, word-nonword amplitude differences were diminished. The average word-nonword amplitude difference at T3 was related to receptive vocabulary only if 5 or more word stimuli were understood. If ERPs are to ever have clinical utility, their construct validity must be established by investigations that confirm their associations with predictably related constructs. These results contribute to accruing evidence, suggesting that a valid measure of auditory word processing can be derived from the left temporal response to words and nonwords. In addition, this measure can be useful even for participants who do not reportedly understand all of the words presented as experimental stimuli, though it will be important for researchers to track familiarity with word stimuli in future investigations. https://doi.org/10.23641/asha.5614840.
Using Key Part-of-Speech Analysis to Examine Spoken Discourse by Taiwanese EFL Learners
ERIC Educational Resources Information Center
Lin, Yen-Liang
2015-01-01
This study reports on a corpus analysis of samples of spoken discourse between a group of British and Taiwanese adolescents, with the aim of exploring the statistically significant differences in the use of grammatical categories between the two groups of participants. The key word method extended to a part-of-speech level using the web-based…
Recombinative generalization of within-syllable units in nonreading adults with mental retardation.
Saunders, Kathryn J; O'Donnell, Jennifer; Vaidya, Manish; Williams, Dean C
2003-01-01
Two adults with mental retardation demonstrated the recombination of within-syllable units (onsets and rimes) using a spoken-to-printed-word matching-to-sample (MTS) procedure. Further testing with 1 participant showed comprehension of the printed words. Printed-word naming was minimal before, but greater after, comprehension tests. The findings suggest that these procedures hold promise for further basic and applied analyses of word-attack skills.
The Self-Organization of a Spoken Word
Holden, John G.; Rajaraman, Srinivasan
2012-01-01
Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics – interaction dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participant’s distributions than the ex-Gaussian or ex-Wald – alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions. PMID:22783213
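For reference, the two density families named above have the following standard forms, and a two-component mixture combines them with a weight λ; the parameterization is generic, not the authors' fitted model.

```latex
f_{\mathrm{LN}}(t) = \frac{1}{t \sigma \sqrt{2\pi}}
  \exp\!\left( -\frac{(\ln t - \mu)^2}{2\sigma^2} \right),
\qquad
f_{\mathrm{PL}}(t) = \frac{\alpha - 1}{t_{\min}}
  \left( \frac{t}{t_{\min}} \right)^{-\alpha}, \quad t \ge t_{\min},
\qquad
f(t) = \lambda\, f_{\mathrm{LN}}(t) + (1 - \lambda)\, f_{\mathrm{PL}}(t).
```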
Writing academic papers: lost in translation?
Grant, Maria J
2011-12-01
The process of writing for publication is a challenging one. It moves us from the spoken and written word into a realm that requires us to provide supporting evidence to develop an argument in a logical and progressive way. In English language journals, as elsewhere, the quality of the written word is a determining factor in the likelihood of a paper being accepted for publication. By reading past issues of a targeted journal, drawing on the expertise of colleagues and responding positively to feedback, it is possible to significantly enhance your prospects of publication. © 2011 The authors. Health Information and Libraries Journal © 2011 Health Libraries Group.
Rapp, B; Caramazza, A
1997-02-01
We describe the case of a brain-damaged individual whose speech is characterized by difficulty with practically all words except for elements of the closed class vocabulary. In contrast, his written sentence production exhibits a complementary impairment involving the omission of closed class vocabulary items and the relative sparing of nouns. On the basis of these differences we argue: (1) that grammatical categories constitute an organizing parameter of representation and/or processing for each of the independent, modality-specific lexicons, and (2) that these observations contribute to the growing evidence that access to the orthographic and phonological forms of words can occur independently.
Newly learned word forms are abstract and integrated immediately after acquisition
Kapnoula, Efthymia C.; McMurray, Bob
2015-01-01
A hotly debated question in word learning concerns the conditions under which newly learned words compete or interfere with familiar words during spoken word recognition. This has recently been described as a key marker of the integration of a new word into the lexicon and was thought to require consolidation (Dumay & Gaskell, Psychological Science, 18, 35–39, 2007; Gaskell & Dumay, Cognition, 89, 105–132, 2003). Recently, however, Kapnoula, Packard, Gupta, and McMurray (Cognition, 134, 85–99, 2015) showed that interference can be observed immediately after a word is first learned, implying very rapid integration of new words into the lexicon. It is an open question whether these kinds of effects derive from episodic traces of novel words or from more abstract and lexicalized representations. Here we addressed this question by testing inhibition for newly learned words using training and test stimuli presented in different talker voices. During training, participants were exposed to a set of nonwords spoken by a female speaker. Immediately after training, we assessed the ability of the novel word forms to inhibit familiar words, using a variant of the visual world paradigm. Crucially, the test items were produced by a male speaker. An analysis of fixations showed that even with a change in voice, newly learned words interfered with the recognition of similar known words. These findings show that lexical competition effects from newly learned words spread across different talker voices, which suggests that newly learned words can be sufficiently lexicalized, and abstract with respect to talker voice, without consolidation. PMID:26202702
ERIC Educational Resources Information Center
Karins, A. Krisjanis
1995-01-01
Investigates variable deletion of short vowels in word-final unstressed syllables in Latvian spoken in Riga. Affected vowels were almost always inflectional endings and results indicated that internal phonological and prosodic factors (especially distance from main word stress) were the strongest constraints on vowel deletion, along with the…
Thaut, Michael H.; Peterson, David A.; McIntosh, Gerald C.; Hoemberg, Volker
2014-01-01
Recent research on music and brain function has suggested that the temporal pattern structure in music and rhythm can enhance cognitive functions. To further elucidate this question specifically for memory, we investigated whether a musical template can enhance verbal learning in patients with multiple sclerosis (MS) and whether music-assisted learning also influences short-term, system-level brain plasticity. We measured systems-level brain activity with oscillatory network synchronization during music-assisted learning. Specifically, we measured the spectral power of 128-channel electroencephalogram (EEG) in alpha and beta frequency bands in 54 patients with MS. The study sample was randomly divided into two groups, hearing either a spoken or a musical (sung) presentation of Rey’s auditory verbal learning test. We defined the “learning-related synchronization” (LRS) as the percent change in EEG spectral power from the first time the word was presented to the average of the subsequent word encoding trials. LRS differed significantly between the music and the spoken conditions in low alpha and upper beta bands. Patients in the music condition showed overall better word memory and better word order memory and stronger bilateral frontal alpha LRS than patients in the spoken condition. The evidence suggests that a musical mnemonic recruits stronger oscillatory network synchronization in prefrontal areas in MS patients during word learning. It is suggested that the temporal structure implicit in musical stimuli enhances “deep encoding” during verbal learning and sharpens the timing of neural dynamics in brain networks degraded by demyelination in MS. PMID:24982626
ERIC Educational Resources Information Center
Casini, Laurence; Burle, Boris; Nguyen, Noel
2009-01-01
Time is essential to speech. The duration of speech segments plays a critical role in the perceptual identification of these segments, and therefore in that of spoken words. Here, using a French word identification task, we show that vowels are perceived as shorter when attention is divided between two tasks, as compared to a single task control…
ERIC Educational Resources Information Center
Patton-Terry, Nicole; Connor, Carol
2010-01-01
This study explored the spelling skills of African American second graders who produced African American English (AAE) features in speech. The children (N = 92), who varied in spoken AAE use and word reading skills, were asked to spell words that contained phonological and morphological dialect-sensitive (DS) features that can vary between AAE and…
Young Children's Knowledge of the Symbolic Nature of Writing
ERIC Educational Resources Information Center
Treiman, Rebecca; Hompluem, Lana; Gordon, Jessica; Decker, Kristina; Markson, Lori
2016-01-01
Two experiments with one hundred and fourteen 3- to 5-year-old children examined whether children understand that a printed word represents a specific spoken word and that it differs in this way from a drawing. When an experimenter read a word to children and then a puppet used a different but related label for it, such as "dog" for the…
ERIC Educational Resources Information Center
Elbro, Carsten; And Others
1994-01-01
Compared to controls, adults (n=102) who reported a history of difficulties in learning to read were disabled in phonological coding, but less disabled in reading comprehension. Adults with poor phonological coding skills had basic deficits in phonological representations of spoken words, even when semantic word knowledge, phonemic awareness,…
Breining, Bonnie; Nozari, Nazbanou; Rapp, Brenda
2016-04-01
Past research has demonstrated interference effects when words are named in the context of multiple items that share a meaning. This interference has been explained within various incremental learning accounts of word production, which propose that each attempt at mapping semantic features to lexical items induces slight but persistent changes that result in cumulative interference. We examined whether similar interference-generating mechanisms operate during the mapping of lexical items to segments by examining the production of words in the context of others that share segments. Previous research has shown that initial-segment overlap amongst a set of target words produces facilitation, not interference. However, this initial-segment facilitation is likely due to strategic preparation, an external factor that may mask underlying interference. In the present study, we applied a novel manipulation in which the segmental overlap across target items was distributed unpredictably across word positions, in order to reduce strategic response preparation. This manipulation led to interference in both spoken (Exp. 1) and written (Exp. 2) production. We suggest that these findings are consistent with a competitive learning mechanism that applies across stages and modalities of word production.
Processing Electromyographic Signals to Recognize Words
NASA Technical Reports Server (NTRS)
Jorgensen, C. C.; Lee, D. D.
2009-01-01
A recently invented speech-recognition method applies to words that are articulated by means of the tongue and throat muscles but are otherwise not voiced or, at most, are spoken sotto voce. This method could satisfy a need for speech recognition under circumstances in which normal audible speech is difficult, poses a hazard, is disturbing to listeners, or compromises privacy. The method could also be used to augment traditional speech recognition by providing an additional source of information about articulator activity. The method can be characterized as intermediate between (1) conventional speech recognition through processing of voice sounds and (2) a method, not yet developed, of processing electroencephalographic signals to extract unspoken words directly from thoughts. This method involves computational processing of digitized electromyographic (EMG) signals from muscle innervation acquired by surface electrodes under a subject's chin near the tongue and on the side of the subject's throat near the larynx. After preprocessing, digitization, and feature extraction, EMG signals are processed by a neural-network pattern classifier, implemented in software, that performs the bulk of the recognition task as described.
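A minimal sketch of this kind of pipeline, under assumed parameters, is shown below: band-pass filtering of the raw EMG, windowed RMS feature extraction, and a small neural-network classifier. The sampling rate, filter band, window length, and training data are illustrative assumptions, not details of the reported method.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.neural_network import MLPClassifier

FS = 2000  # Hz; assumed surface-EMG sampling rate

def emg_features(signal, fs=FS, win_s=0.1):
    """Band-pass the raw EMG (20-450 Hz is a common surface-EMG band)
    and return per-window RMS amplitudes as a simple feature vector."""
    sos = butter(4, [20, 450], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signal)
    n = int(fs * win_s)
    frames = filtered[: len(filtered) // n * n].reshape(-1, n)
    return np.sqrt((frames ** 2).mean(axis=1))

# Placeholder data: 200 one-second "utterances" labeled from a five-word
# vocabulary; real data would come from the chin/throat electrodes
rng = np.random.default_rng(1)
X = np.array([emg_features(rng.normal(size=FS)) for _ in range(200)])
y = rng.integers(0, 5, size=200)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:3]))  # predicted word indices
```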
Bhatarah, Parveen; Ward, Geoff; Tan, Lydia
2006-03-01
In 3 experiments, participants saw lists of 16 words for free recall with or without a 6-digit immediate serial recall (ISR) task after each word. Free recall was performed under standard visual silent and spoken-aloud conditions (Experiment 1), overt rehearsal conditions (Experiment 2), and fixed rehearsal conditions (Experiment 3). The authors found that in each experiment, there was no effect of ISR on the magnitude of the recency effect, but interleaved ISR disrupted free recall of those words that would otherwise be rehearsed. The authors conclude that ISR and recency cannot both be outputs from a unitary limited-capacity short-term memory store and discuss the possibility that the process of rehearsal may be common to both tasks.
Cascading activation from lexical processing to letter-level processing in written word production.
Buchwald, Adam; Falconer, Carolyn
2014-01-01
Descriptions of language production have identified processes involved in producing language and the presence and type of interaction among those processes. In the case of spoken language production, consensus has emerged that there is interaction among lexical selection processes and phoneme-level processing. This issue has received less attention in written language production. In this paper, we present a novel analysis of the writing-to-dictation performance of an individual with acquired dysgraphia revealing cascading activation from lexical processing to letter-level processing. The individual produced frequent lexical-semantic errors (e.g., chipmunk → SQUIRREL) as well as letter errors (e.g., inhibit → INBHITI) and had a profile consistent with impairment affecting both lexical processing and letter-level processing. The presence of cascading activation is suggested by lower letter accuracy on words that are more weakly activated during lexical selection than on those that are more strongly activated. We operationalize weakly activated lexemes as those lexemes that are produced as lexical-semantic errors (e.g., lethal in deadly → LETAHL) compared to strongly activated lexemes where the intended target word (e.g., lethal) is the lexeme selected for production.
Schreibman, Laura; Stahmer, Aubyn C
2014-05-01
Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally-based intervention, Pivotal Response Training (PRT), to a pictorially-based behavioral intervention, the Picture Exchange Communication System (PECS), on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.
ERIC Educational Resources Information Center
Wagner, Monica; Shafer, Valerie L.; Haxhari, Evis; Kiprovski, Kevin; Behrmann, Katherine; Griffiths, Tara
2017-01-01
Purpose: Atypical cortical sensory waveforms reflecting impaired encoding of auditory stimuli may result from inconsistency in cortical response to the acoustic feature changes within spoken words. Thus, the present study assessed intrasubject stability of the P1-N1-P2 complex and T-complex to multiple productions of spoken nonwords in 48 adults…
Word Length and Lexical Activation: Longer Is Better
ERIC Educational Resources Information Center
Pitt, Mark A.; Samuel, Arthur G.
2006-01-01
Many models of spoken word recognition posit the existence of lexical and sublexical representations, with excitatory and inhibitory mechanisms used to affect the activation levels of such representations. Bottom-up evidence provides excitatory input, and inhibition from phonetically similar representations leads to lexical competition. In such a…
Measuring Syntactic Complexity in Spontaneous Spoken Swedish
ERIC Educational Resources Information Center
Roll, Mikael; Frid, Johan; Horne, Merle
2007-01-01
Hesitation disfluencies after phonetically prominent stranded function words are thought to reflect the cognitive coding of complex structures. Speech fragments following the Swedish function word "att" "that" were analyzed syntactically, and divided into two groups: one with "att" in disfluent contexts, and the other with "att" in fluent…
Young, Victoria; Rochon, Elizabeth; Mihailidis, Alex
2016-11-14
The purpose of this study was to derive data from real, recorded, personal emergency response call conversations to help improve the artificial intelligence and decision making capability of a spoken dialogue system in a smart personal emergency response system. The main study objectives were to: develop a model of personal emergency response; determine categories for the model's features; identify and calculate measures from call conversations (verbal ability, conversational structure, timing); and examine conversational patterns and relationships between measures and model features applicable for improving the system's ability to automatically identify call model categories and predict a target response. This study was exploratory and used mixed methods. Personal emergency response calls were pre-classified according to call model categories identified qualitatively from response call transcripts. The relationships between six verbal ability measures, three conversational structure measures, two timing measures and three independent factors: caller type, risk level, and speaker type, were examined statistically. Emergency medical response services were the preferred response for the majority of medium and high risk calls for both caller types. Older adult callers mainly requested non-emergency medical service responders during medium risk situations. By measuring the number of spoken words-per-minute and turn-length-in-words for the first spoken utterance of a call, older adult and care provider callers could be identified with moderate accuracy. Average call taker response time was calculated using the number-of-speaker-turns and time-in-seconds measures. Care providers and older adults used different conversational strategies when responding to call takers. The words 'ambulance' and 'paramedic' may hold different latent connotations for different callers. The data derived from the real personal emergency response recordings may help a spoken dialogue system classify incoming calls by caller type with moderate probability shortly after the initial caller utterance. Knowing the caller type, the target response for the call may be predicted with some degree of probability and the output dialogue could be tailored to this caller type. The average call taker response time measured from real calls may be used to limit the conversation length in a spoken dialogue system before defaulting to a live call taker.
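As a hypothetical illustration, the sketch below computes two of the reported measures, turn-length-in-words for the first caller utterance and spoken words-per-minute, from a time-stamped transcript and applies a placeholder decision rule; the transcript and the threshold are invented.

```python
# Hypothetical time-stamped transcript: (speaker, text, start_s, end_s)
call = [
    ("caller", "hello I need some help please", 0.0, 3.5),
    ("taker", "what is your emergency", 4.0, 6.0),
    ("caller", "I have fallen and I cannot get up", 6.5, 11.0),
]

caller_turns = [t for t in call if t[0] == "caller"]

# Turn-length-in-words for the first caller utterance
first_turn_words = len(caller_turns[0][1].split())

# Spoken words-per-minute over the caller's total speaking time
total_words = sum(len(t[1].split()) for t in caller_turns)
speaking_min = sum(end - start for _, _, start, end in caller_turns) / 60.0
wpm = total_words / speaking_min

# Placeholder decision rule; the study's actual cut-off is not reported here
caller_type = "older adult" if wpm < 120 else "care provider"
print(first_turn_words, round(wpm, 1), caller_type)
```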
Method for automatic measurement of second language speaking proficiency
NASA Astrophysics Data System (ADS)
Bernstein, Jared; Balogh, Jennifer
2005-04-01
Spoken language proficiency is intuitively related to effective and efficient communication in spoken interactions. However, it is difficult to derive a reliable estimate of spoken language proficiency by situated elicitation and evaluation of a person's communicative behavior. This paper describes the task structure and scoring logic of a group of fully automatic spoken language proficiency tests (for English, Spanish and Dutch) that are delivered via telephone or Internet. Test items are presented in spoken form and require a spoken response. Each test is automatically scored and is based primarily on short, decontextualized tasks that elicit integrated listening and speaking performances. The tests present several types of tasks to candidates, including sentence repetition, question answering, sentence construction, and story retelling. The spoken responses are scored according to the lexical content of the response and a set of acoustic base measures on segments, words and phrases, which are scaled with IRT methods or parametrically combined to optimize fit to human listener judgments. Most responses are isolated spoken phrases and sentences that are scored according to their linguistic content, their latency, and their fluency and pronunciation. The item development procedures and item norming are described.
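A rough sketch of the scoring idea is given below: per-response base measures are combined parametrically and fit to human listener judgments. Ordinary least squares stands in for the IRT scaling described; all measures and ratings are simulated placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# Simulated per-response base measures: lexical-content match (0-1),
# response latency (s), and a fluency proxy (words per second)
X = np.column_stack([
    rng.uniform(0.0, 1.0, 300),
    rng.uniform(0.2, 3.0, 300),
    rng.uniform(1.0, 4.0, 300),
])
# Simulated human listener judgments on an arbitrary rating scale
human = (2.0 + 3.0 * X[:, 0] - 0.5 * X[:, 1] + 0.8 * X[:, 2]
         + rng.normal(0.0, 0.3, 300))

scorer = LinearRegression().fit(X, human)  # weights chosen to fit judges
print(scorer.predict(X[:2]))  # machine scores on the human rating scale
```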
Yoncheva, Yuliya; Maurer, Urs; Zevin, Jason D; McCandliss, Bruce D
2014-08-15
Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings suggest a key role for selective attention in on-line phonological computations. Furthermore, these findings motivate future research on the role that neural mechanisms of attention may play in phonological awareness impairments thought to underlie developmental reading disabilities. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Yoncheva, Yuliya; Maurer, Urs; Zevin, Jason; McCandliss, Bruce
2015-01-01
Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings support the key role of selective attention to phonology in the development of literacy and motivate future research on the neural bases of the interaction between phonological awareness and literacy, deemed central to both typical and atypical reading development. PMID:24746955
Williams, Joshua T; Newman, Sharlene D
2017-02-01
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, relatively few studies have generalized these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and activation of the targets in sparse and dense neighborhoods was compared. Neighborhood density effects in the auditory primed lexical decision task were then compared to previous reports of native deaf signers who were only processing sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.
Vocal Interaction between Children with Down syndrome and their Parents
Thiemann-Bourque, Kathy S.; Warren, Steven F.; Brady, Nancy; Gilkerson, Jill; Richards, Jeffrey A.
2014-01-01
Purpose: The purpose of this study was to describe differences in parent input and child vocal behaviors of children with Down syndrome (DS) compared to typically developing (TD) children. The goals were to describe the language learning environments at distinctly different ages in early childhood. Method: Nine children with DS and 9 age-matched TD children participated; four children in each group were ages 9–11 months and five were between 25 and 54 months. Measures were derived from automated vocal analysis. A digital language processor measured the richness of the child’s language environment, including number of adult words, conversational turns, and child vocalizations. Results: Analyses indicated no significant differences in words spoken by parents of younger vs. older children with DS, and significantly more words spoken by parents of TD children than parents of children with DS. Differences between the DS and TD groups were observed in rates of all vocal behaviors, with no differences noted between the younger and older children with DS; the younger TD children did not vocalize significantly more than the younger DS children. Conclusions: Parents of children with DS continue to provide consistent levels of input across the early language learning years; however, child vocal behaviors remain low after the age of 24 months, suggesting the need for additional and alternative intervention approaches. PMID:24686777
A Positivity Bias in Written and Spoken English and Its Moderation by Personality and Gender.
Augustine, Adam A; Mehl, Matthias R; Larsen, Randy J
2011-09-01
The human tendency to use positive words ("adorable") more often than negative words ("dreadful") is called the linguistic positivity bias. We find evidence for this bias in two studies of word use, one based on written corpora and another based on naturalistic speech samples. In addition, we demonstrate that the positivity bias applies to nouns and verbs as well as adjectives. We also show that it is found to the same degree in written as well as spoken English. Moreover, personality traits and gender moderate the effect, such that persons high on extraversion and agreeableness and women display a larger positivity bias in naturalistic speech. Results are discussed in terms of how the linguistic positivity bias may serve as a mechanism for social facilitation. People, in general, and some people more than others, tend to talk about the brighter side of life.
Higgins, Irina; Stringer, Simon; Schnupp, Jan
2017-01-01
The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-time dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable. PMID:28797034
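For orientation, the sketch below shows the standard pairwise STDP update on which such models build: potentiation when a presynaptic spike precedes a postsynaptic spike and depression otherwise, both decaying exponentially with the spike-time interval. The learning rates and time constant are generic textbook values, not this network's parameters.

```python
import numpy as np

def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pairwise STDP weight change: potentiation when the presynaptic
    spike precedes the postsynaptic one (delta_t = t_post - t_pre > 0),
    depression otherwise, both decaying exponentially with the interval."""
    dt = np.asarray(delta_t_ms, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_ms),
                    -a_minus * np.exp(dt / tau_ms))

print(stdp_dw([5.0, -5.0]))  # LTP for +5 ms, LTD for -5 ms
```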
School-aged children can benefit from audiovisual semantic congruency during memory encoding.
Heikkilä, Jenni; Tiippana, Kaisa
2016-05-01
Although we live in a multisensory world, children's memory has usually been studied by concentrating on only one sensory modality at a time. In this study, we investigated how audiovisual encoding affects recognition memory. Children (n = 114) from three age groups (8, 10 and 12 years) memorized auditory or visual stimuli presented with a semantically congruent, incongruent or non-semantic stimulus in the other modality during encoding. Subsequent recognition memory performance was better for auditory or visual stimuli initially presented together with a semantically congruent stimulus in the other modality than for stimuli accompanied by a non-semantic stimulus in the other modality. This congruency effect was observed for pictures presented with sounds, for sounds presented with pictures, for spoken words presented with pictures and for written words presented with spoken words. The present results show that semantically congruent multisensory experiences during encoding can improve memory performance in school-aged children.
Syntactic Predictability in the Recognition of Carefully and Casually Produced Speech
ERIC Educational Resources Information Center
Viebahn, Malte C.; Ernestus, Mirjam; McQueen, James M.
2015-01-01
The present study investigated whether the recognition of spoken words is influenced by how predictable they are given their syntactic context and whether listeners assign more weight to syntactic predictability when acoustic-phonetic information is less reliable. Syntactic predictability was manipulated by varying the word order of past…
Words Spoken by Teachers to Primary-Level Classes of Deaf Children.
ERIC Educational Resources Information Center
Stuckless, E. Ross; Miller, Linda D.
1987-01-01
The study generated a list of the 1000 most frequently used words by teachers of hearing impaired children in six primary grade classes. Results have implications for real time captioning systems of communication. An alphabetical list and a list ordered by frequency of use are appended. (DB)
Prosody Production and Perception with Conversational Speech
ERIC Educational Resources Information Center
Mo, Yoonsook
2010-01-01
Speech utterances are more than the linear concatenation of individual phonemes or words. They are organized by prosodic structures comprising phonological units of different sizes (e.g., syllable, foot, word, and phrase) and the prominence relations among them. As the linguistic structure of spoken languages, prosody serves an important function…
The Mental Lexicon Is Fully Specified: Evidence from Eye-Tracking
ERIC Educational Resources Information Center
Mitterer, Holger
2011-01-01
Four visual-world experiments, in which listeners heard spoken words and saw printed words, compared an optimal-perception account with the theory of phonological underspecification. This theory argues that default phonological features are not specified in the mental lexicon, leading to asymmetric lexical matching: Mismatching input…
Effects of Referent Token Variability on L2 Vocabulary Learning
ERIC Educational Resources Information Center
Sommers, Mitchell S.; Barcroft, Joe
2013-01-01
Previous research has demonstrated substantially improved second language (L2) vocabulary learning when spoken word forms are varied using multiple talkers, speaking styles, or speaking rates. In contrast, the present study varied visual representations of referents for target vocabulary. English speakers learned Spanish words in formats of no…
A Study of Semantic Features: Electrophysiological Correlates.
ERIC Educational Resources Information Center
Wetzel, Frederick; And Others
This study investigates whether words differing in a single contrastive semantic feature (positive/negative) can be discriminated by auditory evoked responses (AERs). Ten right-handed college students were provided with auditory stimuli consisting of 20 relational words (more/less; high/low, etc.) spoken with a middle American accent and computer…
Abstract Graphemic Representations Support Preparation of Handwritten Responses
ERIC Educational Resources Information Center
Shen, Xingjia Rachel; Damian, Marcus F.; Stadthagen-Gonzalez, Hans
2013-01-01
Some evidence suggests that the written production of single words involves not only the ordered retrieval of individual letters, but that abstract, higher-level linguistic properties of the words also influence responses. We report five experiments using the "implicit priming" task adopted from the spoken domain to investigate response…
Tracing Attention and the Activation Flow of Spoken Word Planning Using Eye Movements
ERIC Educational Resources Information Center
Roelofs, Ardi
2008-01-01
The flow of activation from concepts to phonological forms within the word production system was examined in 3 experiments. In Experiment 1, participants named pictures while ignoring superimposed distractor pictures that were semantically related, phonologically related, or unrelated. Eye movements and naming latencies were recorded. The…
ERIC Educational Resources Information Center
Glaus, Marlene
The activities presented in this book, designed to help children translate their thoughts into spoken and written words, can supplement an elementary teacher's own language arts lessons. Objectives for each activity are listed, with the general focus of the many oral activities being to develop a rich verbal background for future written work. The…
ERIC Educational Resources Information Center
Gogate, Lakshmi J.; Bolzani, Laura H.; Betancourt, Eugene A.
2006-01-01
We examined whether mothers' use of temporal synchrony between spoken words and moving objects, and infants' attention to object naming, predict infants' learning of word-object relations. Following 5 min of free play, 24 mothers taught their 6- to 8-month-olds the names of 2 toy objects, "Gow" and "Chi," during a 3-min play…
Sutton, Ann; Trudeau, Natacha; Morford, Jill; Rios, Monica; Poirier, Marie-Andrée
2010-01-01
Children who require augmentative and alternative communication (AAC) systems while they are in the process of acquiring language face unique challenges because they use graphic symbols for communication. In contrast to the situation of typically developing children, they use different modalities for comprehension (auditory) and expression (visual). This study explored the ability of three- and four-year-old children without disabilities to perform tasks involving sequences of graphic symbols. Thirty participants were asked to transpose spoken simple sentences into graphic symbols by selecting individual symbols corresponding to the spoken words, and to interpret graphic symbol utterances by selecting one of four photographs corresponding to a sequence of three graphic symbols. The results showed that these were not simple tasks for the participants, and few of them performed in the expected manner: only one in transposition, and only one-third of participants in interpretation. Individual response strategies in some cases led to contrasting response patterns. Children at this age level have not yet developed the skills required to deal with graphic symbols even though they have mastered the corresponding spoken language structures.
Affective Congruence between Sound and Meaning of Words Facilitates Semantic Decision.
Aryani, Arash; Jacobs, Arthur M
2018-05-31
A similarity between the form and meaning of a word (i.e., iconicity) may help language users to more readily access its meaning through direct form-meaning mapping. Previous work has supported this view by providing empirical evidence for this facilitatory effect in sign language, as well as for onomatopoetic words (e.g., cuckoo) and ideophones (e.g., zigzag). Thus, it remains largely unknown whether the beneficial role of iconicity in making semantic decisions can be considered a general feature in spoken language applying also to "ordinary" words in the lexicon. By capitalizing on the affective domain, and in particular arousal, we organized words in two distinctive groups of iconic vs. non-iconic based on the congruence vs. incongruence of their lexical (meaning) and sublexical (sound) arousal. In a two-alternative forced choice task, we asked participants to evaluate the arousal of printed words that were lexically either high or low arousing. In line with our hypothesis, iconic words were evaluated more quickly and more accurately than their non-iconic counterparts. These results indicate a processing advantage for iconic words, suggesting that language users are sensitive to sound-meaning mappings even when words are presented visually and read silently.
Attention Demands of Spoken Word Planning: A Review
Roelofs, Ardi; Piai, Vitória
2011-01-01
Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot proceed without paying some form of attention. Here, we review evidence that word planning requires some but not full attention. The evidence comes from chronometric studies of word planning in picture naming and word reading under divided attention conditions. It is generally assumed that the central attention demands of a process are indexed by the extent that the process delays the performance of a concurrent unrelated task. The studies measured the speed and accuracy of linguistic and non-linguistic responding as well as eye gaze durations reflecting the allocation of attention. First, empirical evidence indicates that in several task situations, processes up to and including phonological encoding in word planning delay, or are delayed by, the performance of concurrent unrelated non-linguistic tasks. These findings suggest that word planning requires central attention. Second, empirical evidence indicates that conflicts in word planning may be resolved while concurrently performing an unrelated non-linguistic task, making a task decision, or making a go/no-go decision. These findings suggest that word planning does not require full central attention. We outline a computationally implemented theory of attention and word planning, and describe at various points the outcomes of computer simulations that demonstrate the utility of the theory in accounting for the key findings. Finally, we indicate how attention deficits may contribute to impaired language performance, such as in individuals with specific language impairment. PMID:22069393
Song Perception by Professional Singers and Actors: An MEG Study
Rosslau, Ken; Herholz, Sibylle C.; Knief, Arne; Ortmann, Magdalene; Deuster, Dirk; Schmidt, Claus-Michael; am Zehnhoff-Dinnesen, Antoinette; Pantev, Christo; Dobel, Christian
2016-01-01
The cortical correlates of speech and music perception are essentially overlapping, and the specific effects of different types of training on these networks remain unknown. We compared two groups of vocally trained professionals for music and speech, singers and actors, using recited and sung rhyme sequences from German art songs with semantic and/or prosodic/melodic violations (i.e. violations of pitch) of the last word, in order to measure the evoked activation in a magnetoencephalographic (MEG) experiment. MEG data confirmed the existence of intertwined networks for the sung and spoken modality in an early time window after word violation. In essence for this early response, higher activity was measured after melodic/prosodic than semantic violations in predominantly right temporal areas. For singers as well as for actors, modality-specific effects were evident in predominantly left-temporal lateralized activity after semantic expectancy violations in the spoken modality, and right-dominant temporal activity in response to melodic violations in the sung modality. As an indication of a special group-dependent audiation process, higher neuronal activity for singers appeared in a late time window in right temporal and left parietal areas, both after the recited and the sung sequences. PMID:26863437
Caplan, David; Michaud, Jennifer; Hufford, Rebecca
2015-01-01
Sixty-one people with aphasia (pwa) and 41 matched controls were tested for the ability to understand sentences that required the ability to process particular syntactic elements and assign particular syntactic structures. Participants paced themselves word-by-word through 20 examples of 11 spoken sentence types and indicated which of two pictures corresponded to the meaning of each sentence. Sentences were developed in pairs such that comprehension of the experimental version of a pair required an aspect of syntactic processing not required in the corresponding baseline sentence. The need for the syntactic operations required only in the experimental version was triggered at a "critical word" in the experimental sentence. Listening times for critical words in experimental sentences were compared to those for corresponding words in the corresponding baseline sentences. The results were consistent with several models of syntactic comprehension deficits in pwa: resource reduction, slowed lexical and/or syntactic processing, abnormal susceptibility to interference from thematic roles generated non-syntactically. They suggest that a previously unidentified disturbance limiting the duration of parsing and interpretation may lead to these deficits, and that this mechanism may lead to structure-specific deficits in pwa. The results thus point to more than one mechanism underlying syntactic comprehension disorders both across and within pwa.
Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.
Douglas, Michael
2016-02-01
To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved significantly higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation regarding DDI to investigate whether this method can consistently and significantly improve the achievement of children with hearing loss in spoken language skills.
Larraza, Saioa; Samuel, Arthur G; Oñederra, Miren Lourdes
2016-07-20
Accented speech has been seen as an additional impediment for speech processing; it usually adds linguistic and cognitive load to the listener's task. In the current study we analyse where the processing costs of regional dialects come from, a question that has not yet been answered. We quantify the proficiency of Basque-Spanish bilinguals who have different native dialects of Basque on many dimensions and test for costs at each of three levels of processing: phonemic discrimination, word recognition, and semantic processing. The ability to discriminate a dialect-specific contrast is affected by a bilingual's linguistic background less than lexical access is, and an individual's difficulty in lexical access is correlated with basic discrimination problems. Once lexical access is achieved, dialectal variation has little impact on semantic processing. The results are discussed in terms of the presence or absence of correlations between different processing levels. The implications of the results are considered for how models of spoken word recognition handle dialectal variation.
Tone of voice guides word learning in informative referential contexts.
Reinisch, Eva; Jesse, Alexandra; Nygaard, Lynne C
2013-06-01
Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., "daxen") spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant-ant for big-small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalize them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.
The relationship between two visual communication systems: reading and lipreading.
Williams, A
1982-12-01
To explore the relationship between reading and lipreading and to determine whether readers and lipreaders use similar strategies to comprehend verbal messages, 60 female junior and sophomore high school students--30 good and 30 poor readers--were given a filmed lipreading test, a test to measure eye-voice span, a test of cloze ability, and a test of their ability to comprehend printed material presented one word at a time in the absence of an opportunity to regress or scan ahead. The results of this study indicated that (a) there is a significant relationship between reading and lipreading ability; (b) although good readers may be either good or poor lipreaders, poor readers are more likely to be poor than good lipreaders; (c) there are similarities in the strategies used by readers and lipreaders in their approach to comprehending spoken and written material; (d) word-by-word reading of continuous prose appears to be a salient characteristic of both poor reading and poor lipreading ability; and (e) good readers and lipreaders do not engage in word-by-word reading but rather use a combination of visual and linguistic cues to interpret written and spoken messages.
Individual Differences in the Real-Time Comprehension of Children with ASD
Venker, Courtney E.; Eernisse, Elizabeth R.; Saffran, Jenny R.; Weismer, Susan Ellis
2013-01-01
Lay Abstract: Spoken language processing is related to language and cognitive skills in typically developing children, but very little is known about how children with autism spectrum disorders (ASD) comprehend words in real time. Studying this area is important because it may help us understand why many children with autism have delayed language comprehension. Thirty-four children with ASD (3–6 years old) participated in this study. They took part in a language comprehension task that involved looking at pictures on a screen and listening to questions about familiar nouns (e.g., Where’s the shoe?). Children as a group understood the familiar words, but accuracy and processing speed varied considerably across children. The children who were more accurate were also faster to process the familiar words. Children’s language processing accuracy was related to processing speed and language comprehension on a standardized test; nonverbal cognition did not explain additional information after accounting for these factors. Additionally, lexical processing accuracy at age 5½ was related to children’s vocabulary comprehension three years earlier, at age 2½. Autism severity and years of maternal education were unrelated to language processing. Words typically acquired earlier in life were processed more quickly than words acquired later. These findings point to similarities in patterns of language development in typically developing children and children with ASD. Studying real-time comprehension in children with ASD may help us better understand mechanisms of language comprehension in this population. Future work may help explain why some children with ASD develop age-appropriate language skills, whereas others experience lasting deficits. Scientific Abstract: Many children with autism spectrum disorders (ASD) demonstrate deficits in language comprehension, but little is known about how they process spoken language as it unfolds. Real-time lexical comprehension is associated with language and cognition in children without ASD, suggesting that this may also be the case for children with ASD. This study adopted an individual differences approach to characterizing real-time comprehension of familiar words in a group of 34 three- to six-year-olds with ASD. The looking-while-listening paradigm was employed; it measures online accuracy and latency through language-mediated eye movements and has limited task demands. On average, children demonstrated comprehension of the familiar words, but considerable variability emerged. Children with better accuracy were faster to process the familiar words. In combination, processing speed and comprehension on a standardized language assessment explained 63% of the variance in online accuracy. Online accuracy was not correlated with autism severity or maternal education, and nonverbal cognition did not explain unique variance. Notably, online accuracy at age 5½ was related to vocabulary comprehension three years earlier. The words typically learned earliest in life were processed most quickly. Consistent with a dimensional view of language abilities, these findings point to similarities in patterns of language acquisition in typically developing children and those with ASD. Overall, our results emphasize the value of examining individual differences in real-time language comprehension in this population. We propose that the looking-while-listening paradigm is a sensitive and valuable methodological tool that can be applied across many areas of autism research.
PMID:23696214
Lexical integration of novel words without sleep.
Lindsay, Shane; Gaskell, M Gareth
2013-03-01
Learning a new word involves integration with existing lexical knowledge. Previous work has shown that sleep-associated memory consolidation processes are important for the engagement of novel items in lexical competition. In 3 experiments we used spaced exposure regimes to investigate memory for novel words and whether lexical integration can occur within a single day. The degree to which a new spoken word (e.g., cathedruke) engaged in lexical competition with established phonological neighbors (e.g., cathedral) was employed as a marker for lexical integration. We found evidence for improvements in recognition and cued recall following a time period including sleep, but we also found lexical competition effects emerging within a single day. Spaced exposure to novel words on its own did not bring about this within-day lexical competition effect (Experiment 2), which instead occurred with either spaced or massed exposure to novel words, provided that there was also spaced exposure to the phonological neighbors (Experiments 1 and 3). Although previous studies have indicated that sleep-dependent memory consolidation may be sufficient for lexical integration, our results show it is not a necessary precondition. (c) 2013 APA, all rights reserved.
The language machine: psycholinguistics in review.
Altmann, G T M
2001-02-01
Psycholinguistics is the empirical and theoretical study of the mental faculty that underpins our consummate linguistic agility. This review takes a broad look at how the field has developed, from the turn of the 20th century through to the turn of the 21st. Since the linguistic revolution of the mid-1960s, the field has broadened to encompass a wide range of topics and disciplines. A selection of these is reviewed here, starting with a brief overview of the origins of psycholinguistics. More detailed sections describe the language abilities of newborn infants; infants' later abilities as they acquire their first words and develop their first grammatical skills; the representation and access of words (both spoken and written) in the mental lexicon; the representations and processes implicated in sentence processing and discourse comprehension; and finally, the manner in which, as we speak, we produce words and sentences. Psycholinguistics is as much about the study of the human mind itself as it is about the study of that mind's ability to communicate and comprehend.
The Dialogue of Spoken Word and Written Word.
ERIC Educational Resources Information Center
Skidmore, David W.
This paper presents and analyzes two examples of classroom discourse which belong to the genre of "talk about texts." Both are extracts from discussions between a small group of primary school students and their teacher (in England) on the topic of short texts of narrative fiction which they have just read together during the…
Cortical Mechanisms of Speech Perception in Noise
ERIC Educational Resources Information Center
Wong, Patrick C. M.; Uppunda, Ajith K.; Parrish, Todd B.; Dhar, Sumitrajit
2008-01-01
Purpose: The present study examines the brain basis of listening to spoken words in noise, which is a ubiquitous characteristic of communication, with the focus on the dorsal auditory pathway. Method: English-speaking young adults identified single words in 3 listening conditions while their hemodynamic response was measured using fMRI: speech in…
Early Action and Gesture "Vocabulary" and Its Relation with Word Comprehension and Production
ERIC Educational Resources Information Center
Caselli, Maria Cristina; Rinaldi, Pasquale; Stefanini, Silvia; Volterra, Virginia
2012-01-01
Data from 492 Italian infants (8-18 months) were collected with the parental questionnaire MacArthur Bates Communicative Development Inventories to describe early actions and gestures (A-G) "vocabulary" and its relation with spoken vocabulary in both comprehension and production. A-G were more strongly correlated with word comprehension…
Lexical and Metrical Stress in Word Recognition: Lexical or Pre-Lexical Influences?
ERIC Educational Resources Information Center
Slowiaczek, Louisa M.; Soltano, Emily G.; Bernstein, Hilary L.
2006-01-01
The influence of lexical stress and/or metrical stress on spoken word recognition was examined. Two experiments were designed to determine whether response times in lexical decision or shadowing tasks are influenced when primes and targets share lexical stress patterns (JUVenile-BIBlical [Syllables printed in capital letters indicate those…
Verbal Word Choice of Effective Reading Teachers
ERIC Educational Resources Information Center
Moran, Kelly A.
2013-01-01
Humans are fragile beings easily influenced by the verbal behaviors of others. Spoken words can have a multitude of effects on an individual, and the phrases and statements teachers use in their classrooms on a daily basis have the potential to be either detrimental or inspirational. As increasing numbers of students arrive at schools from broken…
Time-Driven Effects on Parsing during Reading
ERIC Educational Resources Information Center
Roll, Mikael; Lindgren, Magnus; Alter, Kai; Horne, Merle
2012-01-01
The phonological trace of perceived words starts fading away in short-term memory after a few seconds. Spoken utterances are usually 2-3 s long, possibly to allow the listener to parse the words into coherent prosodic phrases while they still have a clear representation. Results from this brain potential study suggest that even during silent…
Spoken Words. Technical Report No. 177.
ERIC Educational Resources Information Center
Hall, William S.; And Others
The word frequency lists presented in this publication were compiled to create a database for further research into vocabulary use, especially the variation in vocabulary due to differences in situation and social group membership. Taken from the natural conversations of 40 target children (four and a half to five years old) with their families,…
Muris, Peter; Hendriks, Eline; Bot, Suili
2016-02-01
Children with selective mutism (SM) fail to speak in specific public situations (e.g., school), despite speaking normally in other situations (e.g., at home). The current study explored the phenomenon of SM in a sample of 57 non-clinical children aged 3-6 years. Children performed two speech tasks to assess the absolute number of words they spoke, while their parents completed questionnaires measuring children's levels of SM, social anxiety, and non-social anxiety symptoms, as well as the temperament characteristic of behavioral inhibition. The results indicated that high levels of parent-reported SM were primarily associated with high levels of social anxiety symptoms. The number of spoken words was negatively related to behavioral inhibition: children with a more inhibited temperament used fewer words during the speech tasks. Future research is necessary to test whether the temperament characteristic of behavioral inhibition prompts children to speak less in novel social situations, and whether it is mainly social anxiety that turns this taciturnity into the psychopathology of SM.
Relation between brain activation and lexical performance.
Booth, James R; Burman, Douglas D; Meyer, Joel R; Gitelman, Darren R; Parrish, Todd B; Mesulam, M Marsel
2003-07-01
Functional magnetic resonance imaging (fMRI) was used to determine whether performance on lexical tasks was correlated with cerebral activation patterns. We found that such relationships did exist and that their anatomical distribution reflected the neurocognitive processing routes required by the task. Better performance on intramodal tasks (determining if visual words were spelled the same or if auditory words rhymed) was correlated with more activation in unimodal regions corresponding to the modality of sensory input, namely the fusiform gyrus (BA 37) for written words and the superior temporal gyrus (BA 22) for spoken words. Better performance in tasks requiring cross-modal conversions (determining if auditory words were spelled the same or if visual words rhymed), on the other hand, was correlated with more activation in posterior heteromodal regions, including the supramarginal gyrus (BA 40) and the angular gyrus (BA 39). Better performance in these cross-modal tasks was also correlated with greater activation in unimodal regions corresponding to the target modality of the conversion process (i.e., fusiform gyrus for auditory spelling and superior temporal gyrus for visual rhyming). In contrast, performance on the auditory spelling task was inversely correlated with activation in the superior temporal gyrus possibly reflecting a greater emphasis on the properties of the perceptual input rather than on the relevant transmodal conversions. Copyright 2003 Wiley-Liss, Inc.
Spatial release from masking based on binaural processing for up to six maskers
Yost, William A.
2017-01-01
Spatial Release from Masking (SRM) was measured for identification of a female target word spoken in the presence of male masker words. Target words from a single loudspeaker located at midline were presented when two, four, or six masker words were presented either from the same source as the target or from spatially separated masker sources. All masker words were presented from loudspeakers located symmetrically around the centered target source in the front azimuth hemifield. Three masking conditions were employed: speech-in-speech masking (involving both informational and energetic masking), speech-in-noise masking (involving energetic masking), and filtered speech-in-filtered speech masking (involving informational masking). Psychophysical results were summarized as three-point psychometric functions relating proportion of correct word identification to target-to-masker ratio (in decibels) for both the co-located and spatially separated target and masker sources cases. SRM was then calculated by comparing the slopes and intercepts of these functions. SRM decreased as the number of symmetrically placed masker sources increased from two to six. This decrease was independent of the type of masking, with almost no SRM measured for six masker sources. These results suggest that when SRM is dependent primarily on binaural processing, SRM is effectively limited to fewer than six sound sources. PMID:28372135
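As a rough illustration of how SRM can be derived from psychometric functions of this kind, the sketch below fits a two-parameter logistic to hypothetical three-point data and summarizes SRM as the shift of the 50%-correct point. All numbers are invented, and the paper's exact slope/intercept comparison may differ; this is a minimal sketch, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(tmr_db, midpoint, slope):
    """Psychometric function: P(correct) as a function of
    target-to-masker ratio (TMR) in dB."""
    return 1.0 / (1.0 + np.exp(-slope * (tmr_db - midpoint)))

# Invented three-point psychometric data (proportion correct at three TMRs)
tmr = np.array([-12.0, -6.0, 0.0])
p_colocated = np.array([0.20, 0.55, 0.85])   # target and maskers co-located
p_separated = np.array([0.45, 0.75, 0.95])   # maskers spatially separated

popt_co, _ = curve_fit(logistic, tmr, p_colocated, p0=(-6.0, 0.5))
popt_sep, _ = curve_fit(logistic, tmr, p_separated, p0=(-6.0, 0.5))

# One common summary: SRM as the shift (in dB) of the 50%-correct point
srm_db = popt_co[0] - popt_sep[0]
print(f"SRM ~ {srm_db:.1f} dB")
```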
Sunami, Kishiko; Ishii, Akira; Takano, Sakurako; Yamamoto, Hidefumi; Sakashita, Tetsushi; Tanaka, Masaaki; Watanabe, Yasuyoshi; Yamane, Hideo
2013-11-06
In daily communication, when spoken words are masked by background noise, we can usually still hear them as if they had not been masked, and we can comprehend the speech. This phenomenon is known as phonemic restoration. Since little is known about the neural mechanisms underlying phonemic restoration for speech comprehension, we aimed to identify these mechanisms using magnetoencephalography (MEG). Twelve healthy male volunteers with normal hearing participated in the study. Participants were requested to listen carefully to and understand recorded spoken Japanese stories, which were played either forward (forward condition) or in reverse (reverse condition), with their eyes closed. Several syllables of spoken words were replaced by 300-ms white-noise stimuli with an inter-stimulus interval of 1.6-20.3 s. We compared MEG responses to white-noise stimuli in the forward condition with those in the reverse condition using time-frequency analyses. Increased 3-5 Hz band power in the forward condition relative to the reverse condition was continuously observed in the left inferior frontal gyrus [Brodmann's areas (BAs) 45, 46, and 47], and decreased 18-22 Hz band power following the white-noise stimuli was seen in the left transverse temporal gyrus (BA 42) and superior temporal gyrus (BA 22). These results suggest that the left inferior frontal gyrus and the left transverse and superior temporal gyri are involved in phonemic restoration for speech comprehension. Our findings may help clarify the neural mechanisms of phonemic restoration as well as inform the development of innovative treatments for individuals with impaired speech comprehension, particularly in noisy environments. © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
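For readers unfamiliar with band-power contrasts of this kind, here is a minimal sketch of one common way to compute them, using a zero-phase band-pass filter plus the Hilbert envelope. The authors' actual MEG pipeline and time-frequency method are not specified here; all data below are random stand-ins for epoched sensor or source estimates.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power(epochs, fs, lo, hi):
    """Mean power of epoched data (trials x samples) in the [lo, hi] Hz
    band, via zero-phase band-pass filtering and the Hilbert envelope."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, epochs, axis=-1)
    return np.mean(np.abs(hilbert(filtered, axis=-1)) ** 2)

fs = 1000                                    # hypothetical sampling rate (Hz)
rng = np.random.default_rng(0)
forward = rng.standard_normal((60, 2000))    # random stand-ins for epochs
reverse = rng.standard_normal((60, 2000))

# Contrast 3-5 Hz power between conditions, as in the reported analysis
print(band_power(forward, fs, 3, 5) - band_power(reverse, fs, 3, 5))
```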
NASA Astrophysics Data System (ADS)
Hassanat, Ahmad B. A.; Jassim, Sabah
2010-04-01
In this paper, the automatic lip-reading problem is investigated, and an innovative approach to solving it is proposed. This new VSR approach depends on the signature of the word itself, obtained from a hybrid feature extraction method based on geometric, appearance, and image-transform features. The proposed VSR approach is termed "visual words". The visual words approach consists of two main parts: 1) feature extraction/selection, and 2) visual speech feature recognition. After localizing the face and lips, several visual features of the lips were extracted, such as the height and width of the mouth; the mutual information and quality measurement between the DWT of the current ROI and the DWT of the previous ROI; the ratio of vertical to horizontal features taken from the DWT of the ROI; the ratio of vertical edges to horizontal edges of the ROI; the appearance of the tongue; and the appearance of the teeth. Each spoken word is represented by eight signals, one per feature. These signals preserve the dynamics of the spoken word, which carry a good portion of its information. The system is then trained on these features using KNN and DTW. The approach has been evaluated using a large database of different speakers and large experiment sets. The evaluation has demonstrated the efficiency of the visual words approach and shown that VSR is a speaker-dependent problem.
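The abstract gives no implementation details, but a minimal sketch of the recognition stage it describes, nearest-neighbour classification of per-word feature signals under a dynamic-time-warping distance, might look like the following. The feature values, labels, and dimensions are hypothetical; this is an illustration of KNN + DTW in general, not the authors' code.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    (frames x features), with Euclidean local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_predict(query, templates, labels, k=1):
    """Label a word's feature signals by its k nearest DTW templates."""
    dists = [dtw_distance(query, t) for t in templates]
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical data: each word is a sequence of frames, 8 features per frame
rng = np.random.default_rng(1)
templates = [rng.standard_normal((30, 8)) for _ in range(4)]
labels = ["yes", "no", "yes", "no"]
print(knn_predict(rng.standard_normal((25, 8)), templates, labels, k=3))
```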
Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode
Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina
2013-01-01
Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects. PMID:23554976
ERIC Educational Resources Information Center
Saiegh-Haddad, Elinor; Schiff, Rachel
2016-01-01
All native speakers of Arabic read in a language variety that is remarkably distant from the one they use in everyday speech. The study tested the impact of this distance on reading accuracy and fluency by comparing reading of Standard Arabic (StA) words, used in StA only, versus Spoken Arabic (SpA) words, used in SpA too, among Arabic native…
Meinzen-Derr, Jareen; Wiley, Susan; McAuley, Rose; Smith, Laura; Grether, Sandra
2017-11-01
This pilot study assessed the effect of augmentative and alternative communication technology on language development in children who are deaf or hard-of-hearing. Five children ages 5-10 years with permanent bilateral hearing loss who were identified with language underperformance participated in an individualized 24-week structured program using the application TouchChat WordPower on iPads®. Language samples were analyzed for changes in mean length of utterance, vocabulary words, and mean turn length. Repeated measures models assessed change over time. The baseline median mean length of utterance was 2.41 (range 1.09-6.63; mean 2.88) and significantly increased over time (p = 0.002) to a median of 3.68 at the final visit (range 1.97-6.81; mean 3.62). At baseline, the median total number of words spoken per language sample was 251 (range 101-458), with 100 (range 36-100) different words spoken. Total words and different words significantly increased over time (β = 26.8 (7.1), p = 0.001 for total words; β = 8.0 (2.7), p = 0.008 for different words). Mean turn length values also increased slightly over time. Using augmentative and alternative communication technology on iPads® shows promise in supporting rapid language growth among elementary school-age children who are deaf or hard-of-hearing with language underperformance.
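As a simple illustration of the language-sample measures reported above, the sketch below computes mean length of utterance (counted in words here for simplicity; MLU is often counted in morphemes), total words, and number of different words from a toy transcript. The utterances are invented.

```python
def language_sample_measures(utterances):
    """Mean length of utterance (in words), total words, and number of
    different words, computed from a transcribed language sample."""
    tokens = [u.lower().split() for u in utterances]
    total_words = sum(len(t) for t in tokens)
    mlu = total_words / len(tokens)
    different_words = len({w for t in tokens for w in t})
    return mlu, total_words, different_words

sample = ["want more juice", "mommy go", "I want the big ball"]
print(language_sample_measures(sample))  # (3.33..., 10, 9)
```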
Kapnoula, Efthymia C.; McMurray, Bob
2016-01-01
Language learning is generally described as a problem of acquiring new information (e.g., new words). However, equally important are changes in how the system processes known information. For example, a wealth of studies has suggested dramatic changes over development in how efficiently children recognize familiar words, but it is unknown what kind of experience-dependent mechanisms of plasticity give rise to such changes in real-time processing. We examined the plasticity of the language processing system by testing whether a fundamental aspect of spoken word recognition, lexical interference, can be altered by experience. Adult participants were trained on a set of familiar words over a series of 4 tasks. In the high-competition (HC) condition, tasks were designed to encourage coactivation of similar words (e.g., net and neck) and to require listeners to resolve this competition. Tasks were similar in the low-competition (LC) condition, but did not enhance this competition. Immediately after training, interlexical interference was tested using a visual world paradigm task. Participants in the HC group resolved interference to a fuller degree than those in the LC group, demonstrating that experience can shape the way competition between words is resolved. TRACE simulations showed that the observed late differences in the pattern of interference resolution can be attributed to differences in the strength of lexical inhibition. These findings inform cognitive models in many domains that involve competition/interference processes, and suggest an experience-dependent mechanism of plasticity that may underlie longer term changes in processing efficiency associated with both typical and atypical development. PMID:26709587
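The TRACE model itself is considerably richer, but a toy two-unit competition network makes the point about inhibition strength concrete: with stronger mutual inhibition, the winning word suppresses its competitor more fully. Everything below (parameters, inputs, dynamics) is illustrative only and is not the authors' simulation.

```python
def compete(input_a, input_b, inhibition, steps=500, dt=0.1, decay=1.0):
    """Minimal two-unit lexical competition: leaky accumulators with
    mutual inhibition, loosely TRACE-like. Returns final activations."""
    a = b = 0.0
    for _ in range(steps):
        da = input_a - decay * a - inhibition * max(b, 0.0)
        db = input_b - decay * b - inhibition * max(a, 0.0)
        a += dt * da
        b += dt * db
    return a, b

# Stronger lexical inhibition resolves the competition more fully:
print(compete(1.0, 0.8, inhibition=0.2))  # -> roughly (0.88, 0.63)
print(compete(1.0, 0.8, inhibition=0.8))  # -> roughly (1.0, 0.0)
```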
Acoustic masking disrupts time-dependent mechanisms of memory encoding in word-list recall.
Cousins, Katheryn A Q; Dar, Hayim; Wingfield, Arthur; Miller, Paul
2014-05-01
Recall of recently heard words is affected by the clarity of presentation: Even if all words are presented with sufficient clarity for successful recognition, those that are more difficult to hear are less likely to be recalled. Such a result demonstrates that memory processing depends on more than whether a word is simply "recognized" versus "not recognized." More surprising is that, when a single item in a list of spoken words is acoustically masked, prior words that were heard with full clarity are also less likely to be recalled. To account for such a phenomenon, we developed the linking-by-active-maintenance model (LAMM). This computational model of perception and encoding predicts that these effects will be time dependent. Here we challenged our model by investigating whether and how the impact of acoustic masking on memory depends on presentation rate. We found that a slower presentation rate causes a more disruptive impact of stimulus degradation on prior, clearly heard words than does a fast rate. These results are unexpected according to prior theories of effortful listening, but we demonstrated that they can be accounted for by LAMM.
Can Explicit Training in Cued Speech Improve Phoneme Identification?
ERIC Educational Resources Information Center
Rees, R.; Fitzpatrick, C.; Foulkes, J.; Peterson, H.; Newton, C.
2017-01-01
When identifying phonemes in new spoken words, lipreading is an important source of information for many deaf people. Because many groups of phonemes are virtually indistinguishable by sight, deaf people are able to identify about 30% of phonemes when lipreading non-words. Cued speech (CS) is a system of hand shapes and hand positions used…
ERIC Educational Resources Information Center
Cortés-Monter, Diana R.; Angulo-Chavira, Armando Q.; Arias-Trejo, Natalia
2017-01-01
This study aimed to determine whether the reading skills of third-grade schoolchildren are associated with their preferences for semantic, phonological, and shape competitors (images or printed words) after being exposed to a spoken critical word. Two groups of children participated: skilled readers and less-skilled readers. Through a…
Implicit Meaning in 18-Month-Old Toddlers
ERIC Educational Resources Information Center
Delle Luche, Claire; Durrant, Samantha; Floccia, Caroline; Plunkett, Kim
2014-01-01
A substantial body of evidence demonstrates that infants understand the meaning of spoken words from as early as 6 months. Yet little is known about their ability to do so in the absence of any visual referent, which would offer diagnostic evidence for an adult-like, symbolic interpretation of words and their use in language-mediated thought. We…
Second Language Learners' Contiguous and Discontiguous Multi-Word Unit Use over Time
ERIC Educational Resources Information Center
Yuldashev, Aziz; Fernandez, Julieta; Thorne, Steven L.
2013-01-01
Research has described the key role of formulaic language use in both written and spoken communication (Schmitt, 2004; Wray, 2002), as well as in relation to L2 learning (Ellis, Simpson-Vlach, & Maynard, 2008). Relatively few studies have examined related fixed and semi-fixed multi-word units (MWUs), which comprise fixed parts with the potential…
Russian: A Guide to the Spoken Language.
ERIC Educational Resources Information Center
Department of Defense, Washington, DC.
This Russian language guide is designed to assist in carrying on simple conversations in Russian and is used in conjunction with records. Russian ranks after Chinese and English as the third most widespread language in the world. All the words and phrases are written in a simplified spelling which is read like English. Useful words and phrases include…
Memory Deficits in Early Infantile Autism: Some Similarities to the Amnesiac Syndrome
ERIC Educational Resources Information Center
Boucher, Jill; Warrington, Elizabeth K.
1976-01-01
Autistic children were compared with control children on tasks in which retention was tested by different methods. In three tests of recall, using named pictures, written words and spoken words as test stimuli, autistic children were impaired in comparison with age-matched normal children and with controls matched for verbal and non-verbal…
2008-04-01
A head and torso simulator (HATS) was selected as the listener headform for this effort. The HATS has binaural sound quality microphones inserted into the ear canals and rubber pinnae. An appendix presents the word lists and subject responses for the Modified Rhyme Test (MRT); for example, MRT Set 1 (spoken word originally shown in bold type) includes: kick, lick, sick, tick, wick, pick; neat, beat, seat, meat.
Two-Year-Olds Compute Syntactic Structure On-Line
ERIC Educational Resources Information Center
Bernal, Savita; Dehaene-Lambertz, Ghislaine; Millotte, Severine; Christophe, Anne
2010-01-01
Syntax allows human beings to build an infinite number of new sentences from a finite stock of words. Because toddlers typically utter only one or two words at a time, they have been thought to have no syntax. Using event-related potentials (ERPs), we demonstrated that 2-year-olds do compute syntactic structure when listening to spoken sentences.…
ERIC Educational Resources Information Center
Patro, Chhayakanta; Mendel, Lisa Lucks
2018-01-01
Purpose: The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and investigate facilitative effects of semantic contexts on the IPs. Method: Listeners with CIs as well as those with normal hearing (NH)…
ERIC Educational Resources Information Center
Milburn, Trelani F.; Hipfner-Boucher, Kathleen; Weitzman, Elaine; Greenberg, Janice; Pelletier, Janette; Girolametto, Luigi
2017-01-01
Preschool children begin to represent spoken language in print long before receiving formal instruction in spelling and writing. The current study sought to identify the component skills that contribute to preschool children's ability to begin to spell words and write their name. Ninety-five preschool children (mean age = 57 months) completed a…
The role of visual representations during the lexical access of spoken words
Lewis, Gwyneth; Poeppel, David
2015-01-01
Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability - concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. PMID:24814579
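A bare-bones version of a trial-by-trial correlational time course analysis of the kind described, correlating a per-trial predictor such as imageability with windowed ROI activity, could be sketched as follows. Array shapes, window size, and the use of a simple Pearson correlation are assumptions; the authors' actual MEG analysis is more involved.

```python
import numpy as np

def correlation_time_course(roi_activity, predictor, win=25):
    """Correlate a per-trial predictor (e.g., imageability ratings) with
    ROI activity (trials x timepoints), in sliding time windows."""
    n_trials, n_times = roi_activity.shape
    r = np.empty(n_times - win)
    for t in range(n_times - win):
        window_mean = roi_activity[:, t:t + win].mean(axis=1)
        r[t] = np.corrcoef(window_mean, predictor)[0, 1]
    return r

rng = np.random.default_rng(2)
activity = rng.standard_normal((200, 300))   # stand-in single-ROI estimates
imageability = rng.uniform(1, 7, size=200)   # hypothetical ratings
print(correlation_time_course(activity, imageability)[:5])
```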
Zhuang, Jie; Devereux, Barry J
2017-02-07
As spoken language unfolds over time the speech input transiently activates multiple candidates at different levels of the system - phonological, lexical, and syntactic - which in turn leads to short-lived between-candidate competition. In an fMRI study, we investigated how different kinds of linguistic competition may be modulated by the presence or absence of a prior context (Tyler 1984; Tyler et al. 2008). We found significant effects of lexico-phonological competition for isolated words, but not for words in short phrases, with high competition yielding greater activation in left inferior frontal gyrus (LIFG) and posterior temporal regions. This suggests that phrasal contexts reduce lexico-phonological competition by eliminating form-class inconsistent cohort candidates. A corpus-derived measure of lexico-syntactic competition was associated with greater activation in LIFG for verbs in phrases, but not for isolated verbs, indicating that lexico-syntactic information is boosted by the phrasal context. Together, these findings indicate that LIFG plays a general role in resolving different kinds of linguistic competition.
Temporal lobe networks supporting the comprehension of spoken words.
Bonilha, Leonardo; Hillis, Argye E; Hickok, Gregory; den Ouden, Dirk B; Rorden, Chris; Fridriksson, Julius
2017-09-01
Auditory word comprehension is a cognitive process that involves the transformation of auditory signals into abstract concepts. Traditional lesion-based studies of stroke survivors with aphasia have suggested that neocortical regions adjacent to auditory cortex are primarily responsible for word comprehension. However, recent primary progressive aphasia and normal neurophysiological studies have challenged this concept, suggesting that the left temporal pole is crucial for word comprehension. Due to its vasculature, the temporal pole is not commonly completely lesioned in stroke survivors, and this heterogeneity may have prevented its identification in lesion-based studies of auditory comprehension. We aimed to resolve this controversy using a combined voxel-based- and structural connectome-lesion symptom mapping approach, since cortical dysfunction after stroke can arise from cortical damage or from white matter disconnection. Magnetic resonance imaging (T1-weighted and diffusion tensor imaging-based structural connectome), auditory word comprehension and object recognition tests were obtained from 67 chronic left hemisphere stroke survivors. We observed that damage to the inferior temporal gyrus, to the fusiform gyrus and to a white matter network including the left posterior temporal region and its connections to the middle temporal gyrus, inferior temporal gyrus, and cingulate cortex, was associated with word comprehension difficulties after factoring out object recognition. These results suggest that the posterior lateral and inferior temporal regions are crucial for word comprehension, serving as a hub to integrate auditory and conceptual processing. Early processing linking auditory words to concepts is situated in posterior lateral temporal regions, whereas additional and deeper levels of semantic processing likely require more anterior temporal regions. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved.
Representation of grammatical categories of words in the brain.
Hillis, A E; Caramazza, A
1995-01-01
We report the performance of a patient who, as a consequence of left frontal and temporoparietal strokes, makes far more errors on nouns than on verbs in spoken output tasks, but makes far more errors on verbs than on nouns in written input tasks. This double dissociation within a single patient with respect to grammatical category provides evidence for the hypothesis that phonological and orthographic representations of nouns and verbs are processed by independent neural mechanisms. Furthermore, the opposite dissociation in the verbal output modality, an advantage for nouns over verbs in spoken tasks, by a different patient using the same stimuli has also been reported (Caramazza & Hillis, 1991). This double dissociation across patients on the same task indicates that results cannot be ascribed to "greater difficulty" with one type of stimulus, and provides further evidence for the view that grammatical category information is an important organizational principle of lexical knowledge in the brain.
Word Learning Deficits in Children With Dyslexia
Hogan, Tiffany; Green, Samuel; Gray, Shelley; Cabbage, Kathryn; Cowan, Nelson
2017-01-01
Purpose: The purpose of this study is to investigate word learning in children with dyslexia to ascertain their strengths and weaknesses during the configuration stage of word learning. Method: Children with typical development (N = 116) and dyslexia (N = 68) participated in computer-based word learning games that assessed word learning in 4 sets of games that manipulated phonological or visuospatial demands. All children were monolingual English-speaking 2nd graders without oral language impairment. The word learning games measured children's ability to link novel names with novel objects, to make decisions about the accuracy of those names and objects, to recognize the semantic features of the objects, and to produce the names of the novel words. Accuracy data were analyzed using analyses of covariance with nonverbal intelligence scores as a covariate. Results: Word learning deficits were evident for children with dyslexia across every type of manipulation and on 3 of 5 tasks, but not for every combination of task/manipulation. Deficits were more common when task demands taxed phonology. Visuospatial manipulations led to both disadvantages and advantages for children with dyslexia. Conclusion: Children with dyslexia evidence spoken word learning deficits, but their performance is highly dependent on manipulations and task demand, suggesting a processing trade-off between visuospatial and phonological demands. PMID:28388708
Saiegh-Haddad, Elinor; Ghawi-Dakwar, Ola
2017-01-01
The study tested the impact of the phonological and lexical distance between a dialect of Palestinian Arabic spoken in the north of Israel (SpA) and Modern Standard Arabic (StA or MSA) on word and non-word repetition in children with specific language impairment (SLI) and in typically developing (TD) age-matched controls. Fifty kindergarten children (25 SLI, 25 TD; mean age 5;5) and fifty first-grade children (25 SLI, 25 TD; mean age 6;11) were tested with a repetition task for real words and pseudowords 1-4 syllables long; items varied systematically in whether each encoded a novel StA phoneme, namely a phoneme that is used only in StA and not in the spoken dialect targeted. Real words also varied in whether they were lexically novel, meaning whether the word is used only in StA and not in SpA. SLI children were found to significantly underperform TD children on all repetition tasks, indicating a general phonological memory deficit. More interesting for the current investigation is the observed strong and consistent effect of phonological novelty on word and non-word repetition in SLI and TD children, with a stronger effect observed in SLI. In contrast with phonological novelty, the effect of lexical novelty on word repetition was limited, and it did not interact with group. The results are argued to reflect the role of linguistic distance in phonological memory for novel linguistic units in Arabic SLI and, hence, to support a specific Linguistic Distance Hypothesis of SLI in a diglossic setting. The implications of the findings for assessment, diagnosis, and intervention with Arabic-speaking children with SLI are discussed. PMID:29213248
Distraction control processes in free recall: benefits and costs to performance.
Marsh, John E; Sörqvist, Patrik; Hodgetts, Helen M; Beaman, C Philip; Jones, Dylan M
2015-01-01
How is semantic memory influenced by individual differences under conditions of distraction? This question was addressed by observing how participants recalled visual target words--drawn from a single category--while ignoring spoken distractor words that were members of either the same or a different (single) category. Working memory capacity (WMC) was related to disruption only with synchronous, not asynchronous, presentation, and distraction was greater when the words were presented synchronously. Subsequent experiments found greater negative priming of distractors among individuals with higher WMC, but this may be dependent on targets and distractors being comparable category exemplars. With less dominant category members as distractors, target recall was impaired--relative to control--only among individuals with low WMC. The results highlight the role of cognitive control resources in target-distractor selection and the individual-specific cost implications of such cognitive control. PsycINFO Database Record (c) 2015 APA, all rights reserved.
The role of beat gesture and pitch accent in semantic processing: an ERP study.
Wang, Lin; Chu, Mingyuan
2013-11-01
The present study investigated whether and how beat gesture (small baton-like hand movements used to emphasize information in speech) influences semantic processing as well as its interaction with pitch accent during speech comprehension. Event-related potentials were recorded as participants watched videos of a person gesturing and speaking simultaneously. The critical words in the spoken sentences were accompanied by a beat gesture, a control hand movement, or no hand movement, and were expressed either with or without pitch accent. We found that both beat gesture and control hand movement induced smaller negativities in the N400 time window than when no hand movement was presented. The reduced N400s indicate that both beat gesture and control movement facilitated the semantic integration of the critical word into the sentence context. In addition, the words accompanied by beat gesture elicited smaller negativities in the N400 time window than those accompanied by control hand movement over right posterior electrodes, suggesting that beat gesture has a unique role for enhancing semantic processing during speech comprehension. Finally, no interaction was observed between beat gesture and pitch accent, indicating that they affect semantic processing independently. © 2013 Elsevier Ltd. All rights reserved.
Auditory Selective Attention to Speech Modulates Activity in the Visual Word Form Area
Yoncheva, Yuliya N.; Zevin, Jason D.; Maurer, Urs
2010-01-01
Selective attention to speech versus nonspeech signals in complex auditory input could produce top-down modulation of cortical regions previously linked to perception of spoken, and even visual, words. To isolate such top-down attentional effects, we contrasted 2 equally challenging active listening tasks, performed on the same complex auditory stimuli (words overlaid with a series of 3 tones). Instructions required selectively attending to either the speech signals (in service of rhyme judgment) or the melodic signals (tone-triplet matching). Selective attention to speech, relative to attention to melody, was associated with blood oxygenation level–dependent (BOLD) increases during functional magnetic resonance imaging (fMRI) in left inferior frontal gyrus, temporal regions, and the visual word form area (VWFA). Further investigation of the activity in visual regions revealed overall deactivation relative to baseline rest for both attention conditions. Topographic analysis demonstrated that while attending to melody drove deactivation equivalently across all fusiform regions of interest examined, attending to speech produced a regionally specific modulation: deactivation of all fusiform regions, except the VWFA. Results indicate that selective attention to speech can topographically tune extrastriate cortex, leading to increased activity in VWFA relative to surrounding regions, in line with the well-established connectivity between areas related to spoken and visual word perception in skilled readers. PMID:19571269